Page 1 of 4
I want to understand if it's possible to split up the tokenscope.ts file into multiple files without breaking anything related to the plugin. It's fine if I have to make changes to install.sh or something like that, but I would very much like it to not break any functionality or anything like that. You can look at sst/opencode or opencode.ai for documentation and code if you need to.
I'll help you understand if it's possible to split up the tokenscope.ts file into multiple files. Let me first examine the current structure of your plugin to understand how it works and what dependencies exist.
All 4 tools executed successfully.
<file>
00001| import type { Plugin } from "@opencode-ai/plugin"
00002| import { tool } from "@opencode-ai/plugin"
00003| import path from "path"
00004| import fs from "fs/promises"
00005| import { fileURLToPath, pathToFileURL } from "url"
00006|
00007| const DEFAULT_ENTRY_LIMIT = 3
00008| const VENDOR_ROOT = path.join(path.dirname(fileURLToPath(import.meta.url)), "vendor", "node_modules")
00009|
00010| // Types
00011|
00012| interface SessionMessage {
00013| info: SessionMessageInfo
00014| parts: SessionMessagePart[]
00015| }
00016|
00017| interface SessionMessageInfo {
00018| id: string
00019| role: string
00020| modelID?: string
00021| providerID?: string
00022| system?: string[]
00023| tokens?: TokenUsage
00024| cost?: number
00025| }
00026|
00027| interface TokenUsage {
00028| input?: number
00029| output?: number
00030| reasoning?: number
00031| cache?: {
00032| read?: number
00033| write?: number
00034| }
00035| }
00036|
00037| type SessionMessagePart =
00038| | { type: "text"; text: string; synthetic?: boolean }
00039| | { type: "reasoning"; text: string }
00040| | { type: "tool"; tool: string; state: ToolState }
00041| | { type: string; [key: string]: unknown }
00042|
00043| function isToolPart(part: SessionMessagePart): part is { type: "tool"; tool: string; state: ToolState } {
00044| return part.type === "tool"
00045| }
00046|
00047| function isReasoningPart(part: SessionMessagePart): part is { type: "reasoning"; text: string } {
00048| return part.type === "reasoning"
00049| }
00050|
00051| function isTextPart(part: SessionMessagePart): part is { type: "text"; text: string; synthetic?: boolean } {
00052| return part.type === "text"
00053| }
00054|
00055| interface ToolState {
00056| status: "pending" | "running" | "completed" | "error"
00057| output?: string
00058| }
00059|
00060| interface CategoryEntry {
00061| label: string
00062| tokens: number
00063| }
00064|
00065| interface CategorySummary {
00066| label: string
00067| totalTokens: number
00068| entries: CategoryEntry[]
00069| allEntries: CategoryEntry[]
00070| }
00071|
00072| interface TokenAnalysis {
00073| sessionID: string
00074| model: TokenModel
00075| categories: {
00076| system: CategorySummary
00077| user: CategorySummary
00078| assistant: CategorySummary
00079| tools: CategorySummary
00080| reasoning: CategorySummary
00081| }
00082| totalTokens: number
00083| inputTokens: number
00084| outputTokens: number
00085| reasoningTokens: number
00086| cacheReadTokens: number
00087| cacheWriteTokens: number
00088| assistantMessageCount: number
00089| mostRecentInput: number
00090| mostRecentOutput: number
00091| mostRecentReasoning: number
00092| mostRecentCacheRead: number
00093| mostRecentCacheWrite: number
00094| sessionCost: number
00095| mostRecentCost: number
00096| allToolsCalled: string[]
00097| toolCallCounts: Map<string, number>
00098| subagentAnalysis?: SubagentAnalysis
00099| }
00100|
00101| interface TokenModel {
00102| name: string
00103| spec: TokenizerSpec
00104| }
00105|
00106| type TokenizerSpec =
00107| | { kind: "tiktoken"; model: string }
00108| | { kind: "transformers"; hub: string }
00109| | { kind: "approx" }
00110|
00111| interface CategoryEntrySource {
00112| label: string
00113| content: string
00114| }
00115|
00116| interface CostEstimate {
00117| isSubscription: boolean
00118| apiSessionCost: number
00119| apiMostRecentCost: number
00120| estimatedSessionCost: number
00121| estimatedInputCost: number
00122| estimatedOutputCost: number
00123| estimatedCacheReadCost: number
00124| estimatedCacheWriteCost: number
00125| pricePerMillionInput: number
00126| pricePerMillionOutput: number
00127| pricePerMillionCacheRead: number
00128| pricePerMillionCacheWrite: number
00129| inputTokens: number
00130| outputTokens: number
00131| reasoningTokens: number
00132| cacheReadTokens: number
00133| cacheWriteTokens: number
00134| }
00135|
00136| interface SubagentSummary {
00137| sessionID: string
00138| title: string
00139| agentType: string
00140| inputTokens: number
00141| outputTokens: number
00142| reasoningTokens: number
00143| cacheReadTokens: number
00144| cacheWriteTokens: number
00145| totalTokens: number
00146| apiCost: number
00147| estimatedCost: number
00148| assistantMessageCount: number
00149| }
00150|
00151| interface SubagentAnalysis {
00152| subagents: SubagentSummary[]
00153| totalInputTokens: number
00154| totalOutputTokens: number
00155| totalReasoningTokens: number
00156| totalCacheReadTokens: number
00157| totalCacheWriteTokens: number
00158| totalTokens: number
00159| totalApiCost: number
00160| totalEstimatedCost: number
00161| totalApiCalls: number
00162| }
00163|
00164| interface ModelPricing {
00165| input: number
00166| output: number
00167| cacheWrite: number
00168| cacheRead: number
00169| }
00170|
00171| interface ChildSession {
00172| id: string
00173| title: string
00174| parentID?: string
00175| }
00176|
00177| // Model Configuration
00178|
00179| let PRICING_CACHE: Record<string, ModelPricing> | null = null
00180|
00181| async function loadModelPricing(): Promise<Record<string, ModelPricing>> {
00182| if (PRICING_CACHE) return PRICING_CACHE
00183|
00184| try {
00185| const modelsPath = path.join(path.dirname(fileURLToPath(import.meta.url)), 'models.json')
00186| const data = await fs.readFile(modelsPath, 'utf8')
00187| PRICING_CACHE = JSON.parse(data)
00188| return PRICING_CACHE!
00189| } catch {
00190| PRICING_CACHE = { "default": { input: 1, output: 3, cacheWrite: 0, cacheRead: 0 } }
00191| return PRICING_CACHE
00192| }
00193| }
00194|
00195| const OPENAI_MODEL_MAP: Record<string, string> = {
00196| "gpt-5": "gpt-4o",
00197| "o4-mini": "gpt-4o",
00198| "o3": "gpt-4o",
00199| "o3-mini": "gpt-4o",
00200| "o1": "gpt-4o",
00201| "o1-pro": "gpt-4o",
00202| "gpt-4.1": "gpt-4o",
00203| "gpt-4.1-mini": "gpt-4o",
00204| "gpt-4o": "gpt-4o",
00205| "gpt-4o-mini": "gpt-4o-mini",
00206| "gpt-4-turbo": "gpt-4",
00207| "gpt-4": "gpt-4",
00208| "gpt-3.5-turbo": "gpt-3.5-turbo",
00209| "text-embedding-3-large": "text-embedding-3-large",
00210| "text-embedding-3-small": "text-embedding-3-small",
00211| "text-embedding-ada-002": "text-embedding-ada-002",
00212| }
00213|
00214| const TRANSFORMERS_MODEL_MAP: Record<string, string> = {
00215| "claude-opus-4": "Xenova/claude-tokenizer",
00216| "claude-sonnet-4": "Xenova/claude-tokenizer",
00217| "claude-3.7-sonnet": "Xenova/claude-tokenizer",
00218| "claude-3.5-sonnet": "Xenova/claude-tokenizer",
00219| "claude-3.5-haiku": "Xenova/claude-tokenizer",
00220| "claude-3-opus": "Xenova/claude-tokenizer",
00221| "claude-3-sonnet": "Xenova/claude-tokenizer",
00222| "claude-3-haiku": "Xenova/claude-tokenizer",
00223| "claude-2.1": "Xenova/claude-tokenizer",
00224| "claude-2.0": "Xenova/claude-tokenizer",
00225| "claude-instant-1.2": "Xenova/claude-tokenizer",
00226| "llama-4": "Xenova/llama4-tokenizer",
00227| "llama-3.3": "unsloth/Llama-3.3-70B-Instruct",
00228| "llama-3.2": "Xenova/Llama-3.2-Tokenizer",
00229| "llama-3.1": "Xenova/Meta-Llama-3.1-Tokenizer",
00230| "llama-3": "Xenova/llama3-tokenizer-new",
00231| "llama-2": "Xenova/llama2-tokenizer",
00232| "code-llama": "Xenova/llama-code-tokenizer",
00233| "deepseek-r1": "deepseek-ai/DeepSeek-R1",
00234| "deepseek-v3": "deepseek-ai/DeepSeek-V3",
00235| "deepseek-v2": "deepseek-ai/DeepSeek-V2",
00236| "mistral-large": "Xenova/mistral-tokenizer-v3",
00237| "mistral-small": "Xenova/mistral-tokenizer-v3",
00238| "mistral-nemo": "Xenova/Mistral-Nemo-Instruct-Tokenizer",
00239| "devstral-small": "Xenova/Mistral-Nemo-Instruct-Tokenizer",
00240| "codestral": "Xenova/mistral-tokenizer-v3",
00241| }
00242|
00243| const PROVIDER_DEFAULTS: Record<string, TokenizerSpec> = {
00244| anthropic: { kind: "transformers", hub: "Xenova/claude-tokenizer" },
00245| meta: { kind: "transformers", hub: "Xenova/Meta-Llama-3.1-Tokenizer" },
00246| mistral: { kind: "transformers", hub: "Xenova/mistral-tokenizer-v3" },
00247| deepseek: { kind: "transformers", hub: "deepseek-ai/DeepSeek-V3" },
00248| google: { kind: "transformers", hub: "google/gemma-2-9b-it" },
00249| }
00250|
00251| // Tokenizer Management
00252|
00253| class TokenizerManager {
00254| private tiktokenCache = new Map<string, any>()
00255| private transformerCache = new Map<string, any>()
00256| private tiktokenModule?: Promise<any>
00257| private transformersModule?: Promise<any>
00258|
00259| async countTokens(content: string, model: TokenModel): Promise<number> {
00260| if (!content.trim()) return 0
00261|
00262| try {
00263| switch (model.spec.kind) {
00264| case "approx":
00265| return this.approximateTokenCount(content)
00266| case "tiktoken":
00267| return await this.countWithTiktoken(content, model.spec.model)
00268| case "transformers":
00269| return await this.countWithTransformers(content, model.spec.hub)
00270| }
00271| } catch (error) {
00272| console.error(`Token counting error for ${model.name}:`, error)
00273| return this.approximateTokenCount(content)
00274| }
00275| }
00276|
00277| private approximateTokenCount(content: string): number {
00278| return Math.ceil(content.length / 4)
00279| }
00280|
00281| private async countWithTiktoken(content: string, model: string): Promise<number> {
00282| const encoder = await this.loadTiktokenEncoder(model)
00283| try {
00284| return encoder.encode(content).length
00285| } catch {
00286| return this.approximateTokenCount(content)
00287| }
00288| }
00289|
00290| private async countWithTransformers(content: string, hub: string): Promise<number> {
00291| const tokenizer = await this.loadTransformersTokenizer(hub)
00292| if (!tokenizer || typeof tokenizer.encode !== "function") {
00293| return this.approximateTokenCount(content)
00294| }
00295|
00296| try {
00297| const encoding = await tokenizer.encode(content)
00298| return Array.isArray(encoding) ? encoding.length : (encoding?.length ?? this.approximateTokenCount(content))
00299| } catch {
00300| return this.approximateTokenCount(content)
00301| }
00302| }
00303|
00304| private async loadTiktokenEncoder(model: string) {
00305| if (this.tiktokenCache.has(model)) {
00306| return this.tiktokenCache.get(model)
00307| }
00308|
00309| const mod = await this.loadTiktokenModule()
00310| const encodingForModel = mod.encodingForModel ?? mod.default?.encodingForModel
00311| const getEncoding = mod.getEncoding ?? mod.default?.getEncoding
00312|
00313| if (typeof getEncoding !== "function") {
00314| return { encode: (text: string) => ({ length: Math.ceil(text.length / 4) }) }
00315| }
00316|
00317| let encoder
00318| try {
00319| encoder = encodingForModel(model)
00320| } catch {
00321| encoder = getEncoding("cl100k_base")
00322| }
00323|
00324| this.tiktokenCache.set(model, encoder)
00325| return encoder
00326| }
00327|
00328| private async loadTiktokenModule() {
00329| if (!this.tiktokenModule) {
00330| this.tiktokenModule = this.importFromVendor("js-tiktoken")
00331| }
00332| return this.tiktokenModule
00333| }
00334|
00335| private async loadTransformersTokenizer(hub: string) {
00336| if (this.transformerCache.has(hub)) {
00337| return this.transformerCache.get(hub)
00338| }
00339|
00340| try {
00341| const { AutoTokenizer } = await this.loadTransformersModule()
00342| const tokenizer = await AutoTokenizer.from_pretrained(hub)
00343| this.transformerCache.set(hub, tokenizer)
00344| return tokenizer
00345| } catch {
00346| this.transformerCache.set(hub, null)
00347| return null
00348| }
00349| }
00350|
00351| private async loadTransformersModule() {
00352| if (!this.transformersModule) {
00353| this.transformersModule = this.importFromVendor("@huggingface/transformers")
00354| }
00355| return this.transformersModule
00356| }
00357|
00358| private async importFromVendor(pkg: string) {
00359| const pkgJsonPath = path.join(VENDOR_ROOT, pkg, "package.json")
00360| let data: string
00361| try {
00362| data = await fs.readFile(pkgJsonPath, "utf8")
00363| } catch {
00364| throw new Error(
00365| `Token analyzer dependencies missing. Run the install.sh script to install vendor tokenizers.\n` +
00366| `Expected path: ${pkgJsonPath}`
00367| )
00368| }
00369|
00370| const manifest = JSON.parse(data)
00371| const entry = manifest.module ?? manifest.main ?? "index.js"
00372| const entryPath = path.join(VENDOR_ROOT, pkg, entry)
00373| return import(pathToFileURL(entryPath).href)
00374| }
00375| }
00376|
00377| // Model Resolution
00378|
00379| class ModelResolver {
00380| resolveTokenModel(messages: SessionMessage[]): TokenModel {
00381| for (const message of [...messages].reverse()) {
00382| const modelID = this.canonicalize(message.info.modelID)
00383| const providerID = this.canonicalize(message.info.providerID)
00384|
00385| const openaiModel = this.resolveOpenAIModel(modelID, providerID)
00386| if (openaiModel) return openaiModel
00387|
00388| const transformerModel = this.resolveTransformersModel(modelID, providerID)
00389| if (transformerModel) return transformerModel
00390| }
00391|
00392| return { name: "approx", spec: { kind: "approx" } }
00393| }
00394|
00395| private resolveOpenAIModel(modelID?: string, providerID?: string): TokenModel | undefined {
00396| if (providerID === "openai" || providerID === "opencode" || providerID === "azure") {
00397| const mapped = this.mapOpenAI(modelID)
00398| return { name: modelID ?? mapped, spec: { kind: "tiktoken", model: mapped } }
00399| }
00400|
00401| if (modelID && OPENAI_MODEL_MAP[modelID]) {
00402| return { name: modelID, spec: { kind: "tiktoken", model: OPENAI_MODEL_MAP[modelID] } }
00403| }
00404|
00405| return undefined
00406| }
00407|
00408| private resolveTransformersModel(modelID?: string, providerID?: string): TokenModel | undefined {
00409| if (modelID && TRANSFORMERS_MODEL_MAP[modelID]) {
00410| return { name: modelID, spec: { kind: "transformers", hub: TRANSFORMERS_MODEL_MAP[modelID] } }
00411| }
00412|
00413| if (providerID && PROVIDER_DEFAULTS[providerID]) {
00414| return { name: modelID ?? providerID, spec: PROVIDER_DEFAULTS[providerID] }
00415| }
00416|
00417| // Prefix-based fallbacks
00418| if (modelID?.startsWith("claude")) {
00419| return { name: modelID, spec: { kind: "transformers", hub: "Xenova/claude-tokenizer" } }
00420| }
00421|
00422| if (modelID?.startsWith("llama")) {
00423| return {
00424| name: modelID,
00425| spec: { kind: "transformers", hub: TRANSFORMERS_MODEL_MAP[modelID] ?? "Xenova/Meta-Llama-3.1-Tokenizer" },
00426| }
00427| }
00428|
00429| if (modelID?.startsWith("mistral")) {
00430| return { name: modelID, spec: { kind: "transformers", hub: "Xenova/mistral-tokenizer-v3" } }
00431| }
00432|
00433| if (modelID?.startsWith("deepseek")) {
00434| return { name: modelID, spec: { kind: "transformers", hub: "deepseek-ai/DeepSeek-V3" } }
00435| }
00436|
00437| return undefined
00438| }
00439|
00440| private mapOpenAI(modelID?: string): string {
00441| if (!modelID) return "cl100k_base"
00442| return OPENAI_MODEL_MAP[modelID] ?? modelID
00443| }
00444|
00445| private canonicalize(value?: string): string | undefined {
00446| return value?.split("/").pop()?.toLowerCase().trim()
00447| }
00448| }
00449|
00450| // Content Collectors
00451|
00452| class ContentCollector {
00453| collectSystemPrompts(messages: SessionMessage[]): CategoryEntrySource[] {
00454| const prompts = new Map<string, string>()
00455|
00456| for (const message of messages) {
00457| if (message.info.role === "system") {
00458| const content = this.extractText(message.parts)
00459| if (content) prompts.set(content, content)
00460| }
00461|
00462| if (message.info.role === "assistant") {
00463| for (const prompt of message.info.system ?? []) {
00464| const trimmed = (prompt ?? "").trim()
00465| if (trimmed) prompts.set(trimmed, trimmed)
00466| }
00467| }
00468| }
00469|
00470| return Array.from(prompts.values()).map((content, index) => ({
00471| label: this.identifySystemPrompt(content, index + 1),
00472| content,
00473| }))
00474| }
00475|
00476| collectMessageTexts(messages: SessionMessage[], role: "user" | "assistant"): CategoryEntrySource[] {
00477| const results: CategoryEntrySource[] = []
00478| let index = 0
00479|
00480| for (const message of messages) {
00481| if (message.info.role !== role) continue
00482| const content = this.extractText(message.parts)
00483| if (!content) continue
00484|
00485| index += 1
00486| results.push({ label: `${this.capitalize(role)}#${index}`, content })
00487| }
00488|
00489| return results
00490| }
00491|
00492| collectToolOutputs(messages: SessionMessage[]): CategoryEntrySource[] {
00493| const toolOutputs = new Map<string, string>()
00494|
00495| for (const message of messages) {
00496| for (const part of message.parts) {
00497| if (!isToolPart(part)) continue
00498|
00499| if (part.state.status !== "completed") continue
00500|
00501| const output = (part.state.output ?? "").toString().trim()
00502| if (!output) continue
00503|
00504| const toolName = part.tool || "tool"
00505| const existing = toolOutputs.get(toolName) || ""
00506| toolOutputs.set(toolName, existing + (existing ? "\n\n" : "") + output)
00507| }
00508| }
00509|
00510| return Array.from(toolOutputs.entries()).map(([toolName, content]) => ({
00511| label: toolName,
00512| content,
00513| }))
00514| }
00515|
00516| collectToolCallCounts(messages: SessionMessage[]): Map<string, number> {
00517| const toolCounts = new Map<string, number>()
00518|
00519| for (const message of messages) {
00520| for (const part of message.parts) {
00521| if (!isToolPart(part)) continue
00522|
00523| const toolName = part.tool || "tool"
00524| if (toolName) {
00525| toolCounts.set(toolName, (toolCounts.get(toolName) || 0) + 1)
00526| }
00527| }
00528| }
00529|
00530| return toolCounts
00531| }
00532|
00533| collectAllToolsCalled(messages: SessionMessage[]): string[] {
00534| return Array.from(this.collectToolCallCounts(messages).keys()).sort()
00535| }
00536|
00537| collectReasoningTexts(messages: SessionMessage[]): CategoryEntrySource[] {
00538| const results: CategoryEntrySource[] = []
00539| let index = 0
00540|
00541| for (const message of messages) {
00542| for (const part of message.parts) {
00543| if (!isReasoningPart(part)) continue
00544|
00545| const text = (part.text ?? "").toString().trim()
00546| if (!text) continue
00547|
00548| index += 1
00549| results.push({ label: `Reasoning#${index}`, content: text })
00550| }
00551| }
00552|
00553| return results
00554| }
00555|
00556| private extractText(parts: SessionMessagePart[]): string {
00557| return parts
00558| .filter(isTextPart)
00559| .map((part) => part.text ?? "")
00560| .map((text) => text.trim())
00561| .filter(Boolean)
00562| .join("\n\n")
00563| }
00564|
00565| private identifySystemPrompt(content: string, index: number): string {
00566| const lower = content.toLowerCase()
00567|
00568| if (lower.includes("opencode") && lower.includes("cli") && content.length > 500) return "System#MainPrompt"
00569| if (lower.includes("opencode") && lower.includes("cli") && content.length <= 500) return "System#ShortPrompt"
00570| if (lower.includes("agent") && lower.includes("mode")) return "System#AgentMode"
00571| if (lower.includes("permission") || lower.includes("allowed") || lower.includes("deny")) return "System#Permissions"
00572| if (lower.includes("tool") && (lower.includes("rule") || lower.includes("guideline"))) return "System#ToolRules"
00573| if (lower.includes("format") || lower.includes("style") || lower.includes("concise")) return "System#Formatting"
00574| if (lower.includes("project") || lower.includes("repository") || lower.includes("codebase")) return "System#ProjectContext"
00575| if (lower.includes("session") || lower.includes("context") || lower.includes("memory")) return "System#SessionMgmt"
00576| if (content.includes("@") && (content.includes(".md") || content.includes(".txt"))) return "System#FileRefs"
00577| if (content.includes("name:") && content.includes("description:")) return "System#AgentDef"
00578| if (lower.includes("code") && (lower.includes("convention") || lower.includes("standard"))) return "System#CodeGuidelines"
00579|
00580| return `System#${index}`
00581| }
00582|
00583| private capitalize(value: string): string {
00584| if (!value) return value
00585| return value[0].toUpperCase() + value.slice(1)
00586| }
00587| }
00588|
00589| // Token Analysis Engine
00590|
00591| class TokenAnalysisEngine {
00592| constructor(
00593| private tokenizerManager: TokenizerManager,
00594| private contentCollector: ContentCollector
00595| ) {}
00596|
00597| async analyze(
00598| sessionID: string,
00599| messages: SessionMessage[],
00600| tokenModel: TokenModel,
00601| entryLimit: number
00602| ): Promise<TokenAnalysis> {
00603| const systemPrompts = this.contentCollector.collectSystemPrompts(messages)
00604| const userTexts = this.contentCollector.collectMessageTexts(messages, "user")
00605| const assistantTexts = this.contentCollector.collectMessageTexts(messages, "assistant")
00606| const toolOutputs = this.contentCollector.collectToolOutputs(messages)
00607| const reasoningTraces = this.contentCollector.collectReasoningTexts(messages)
00608| const allToolsCalled = this.contentCollector.collectAllToolsCalled(messages)
00609| const toolCallCounts = this.contentCollector.collectToolCallCounts(messages)
00610|
00611| const [system, user, assistant, tools, reasoning] = await Promise.all([
00612| this.buildCategory("system", systemPrompts, tokenModel, entryLimit),
00613| this.buildCategory("user", userTexts, tokenModel, entryLimit),
00614| this.buildCategory("assistant", assistantTexts, tokenModel, entryLimit),
00615| this.buildCategory("tools", toolOutputs, tokenModel, entryLimit),
00616| this.buildCategory("reasoning", reasoningTraces, tokenModel, entryLimit),
00617| ])
00618|
00619| const analysis: TokenAnalysis = {
00620| sessionID,
00621| model: tokenModel,
00622| categories: { system, user, assistant, tools, reasoning },
00623| totalTokens:
00624| system.totalTokens + user.totalTokens + assistant.totalTokens + tools.totalTokens + reasoning.totalTokens,
00625| inputTokens: 0,
00626| outputTokens: 0,
00627| reasoningTokens: 0,
00628| cacheReadTokens: 0,
00629| cacheWriteTokens: 0,
00630| assistantMessageCount: 0,
00631| mostRecentInput: 0,
00632| mostRecentOutput: 0,
00633| mostRecentReasoning: 0,
00634| mostRecentCacheRead: 0,
00635| mostRecentCacheWrite: 0,
00636| sessionCost: 0,
00637| mostRecentCost: 0,
00638| allToolsCalled,
00639| toolCallCounts,
00640| }
00641|
00642| this.applyTelemetryAdjustments(analysis, messages)
00643|
00644| return analysis
00645| }
00646|
00647| private async buildCategory(
00648| label: string,
00649| sources: CategoryEntrySource[],
00650| model: TokenModel,
00651| entryLimit: number
00652| ): Promise<CategorySummary> {
00653| const entries: CategoryEntry[] = []
00654|
00655| for (const source of sources) {
00656| const tokens = await this.tokenizerManager.countTokens(source.content, model)
00657| if (tokens > 0) {
00658| entries.push({ label: source.label, tokens })
00659| }
00660| }
00661|
00662| entries.sort((a, b) => b.tokens - a.tokens)
00663| const limited = entries.slice(0, entryLimit)
00664| const totalTokens = entries.reduce((sum, entry) => sum + entry.tokens, 0)
00665|
00666| return { label, totalTokens, entries: limited, allEntries: entries }
00667| }
00668|
00669| private applyTelemetryAdjustments(analysis: TokenAnalysis, messages: SessionMessage[]) {
00670| const assistants = messages
00671| .filter((m) => m.info.role === "assistant" && (m.info?.tokens || m.info?.cost !== undefined))
00672| .map((m) => ({ msg: m, tokens: m.info.tokens, cost: m.info.cost ?? 0 }))
00673|
00674| let totalInput = 0, totalOutput = 0, totalReasoning = 0
00675| let totalCacheRead = 0, totalCacheWrite = 0, totalCost = 0
00676|
00677| for (const { tokens, cost } of assistants) {
00678| if (tokens) {
00679| totalInput += Number(tokens.input) || 0
00680| totalOutput += Number(tokens.output) || 0
00681| totalReasoning += Number(tokens.reasoning) || 0
00682| totalCacheRead += Number(tokens.cache?.read) || 0
00683| totalCacheWrite += Number(tokens.cache?.write) || 0
00684| }
00685| totalCost += Number(cost) || 0
00686| }
00687|
00688| const mostRecentWithUsage = [...assistants]
00689| .reverse()
00690| .find(({ tokens }) =>
00691| tokens && (
00692| (Number(tokens.input) || 0) +
00693| (Number(tokens.output) || 0) +
00694| (Number(tokens.reasoning) || 0) +
00695| (Number(tokens.cache?.read) || 0) +
00696| (Number(tokens.cache?.write) || 0) > 0
00697| )
00698| ) ?? assistants[assistants.length - 1]
00699|
00700| let mostRecentInput = 0, mostRecentOutput = 0, mostRecentReasoning = 0
00701| let mostRecentCacheRead = 0, mostRecentCacheWrite = 0, mostRecentCost = 0
00702|
00703| if (mostRecentWithUsage) {
00704| const t = mostRecentWithUsage.tokens
00705| if (t) {
00706| mostRecentInput = Number(t.input) || 0
00707| mostRecentOutput = Number(t.output) || 0
00708| mostRecentReasoning = Number(t.reasoning) || 0
00709| mostRecentCacheRead = Number(t.cache?.read) || 0
00710| mostRecentCacheWrite = Number(t.cache?.write) || 0
00711| }
00712| mostRecentCost = Number(mostRecentWithUsage.cost) || 0
00713| }
00714|
00715| analysis.inputTokens = totalInput
00716| analysis.outputTokens = totalOutput
00717| analysis.reasoningTokens = totalReasoning
00718| analysis.cacheReadTokens = totalCacheRead
00719| analysis.cacheWriteTokens = totalCacheWrite
00720| analysis.assistantMessageCount = assistants.length
00721| analysis.sessionCost = totalCost
00722| analysis.mostRecentCost = mostRecentCost
00723| analysis.mostRecentInput = mostRecentInput
00724| analysis.mostRecentOutput = mostRecentOutput
00725| analysis.mostRecentReasoning = mostRecentReasoning
00726| analysis.mostRecentCacheRead = mostRecentCacheRead
00727| analysis.mostRecentCacheWrite = mostRecentCacheWrite
00728|
00729| const recentApiInputTotal = mostRecentInput + mostRecentCacheRead
00730| const localUserAndTools = analysis.categories.user.totalTokens + analysis.categories.tools.totalTokens
00731| const inferredSystemTokens = Math.max(0, recentApiInputTotal - localUserAndTools)
00732|
00733| if (inferredSystemTokens > 0 && analysis.categories.system.totalTokens === 0) {
00734| analysis.categories.system.totalTokens = inferredSystemTokens
00735| analysis.categories.system.entries = [{ label: "System (inferred from API)", tokens: inferredSystemTokens }]
00736| analysis.categories.system.allEntries = analysis.categories.system.entries
00737| }
00738|
00739| analysis.totalTokens =
00740| analysis.categories.system.totalTokens +
00741| analysis.categories.user.totalTokens +
00742| analysis.categories.assistant.totalTokens +
00743| analysis.categories.tools.totalTokens +
00744| analysis.categories.reasoning.totalTokens
00745| }
00746| }
00747|
00748| // Cost Calculator
00749|
00750| class CostCalculator {
00751| constructor(private pricingData: Record<string, ModelPricing>) {}
00752|
00753| calculateCost(analysis: TokenAnalysis): CostEstimate {
00754| const pricing = this.getPricing(analysis.model.name)
00755| const hasActivity = analysis.assistantMessageCount > 0 &&
00756| (analysis.inputTokens > 0 || analysis.outputTokens > 0)
00757| const isSubscription = hasActivity && analysis.sessionCost === 0
00758|
00759| const estimatedInputCost = (analysis.inputTokens / 1_000_000) * pricing.input
00760| const estimatedOutputCost = ((analysis.outputTokens + analysis.reasoningTokens) / 1_000_000) * pricing.output
00761| const estimatedCacheReadCost = (analysis.cacheReadTokens / 1_000_000) * pricing.cacheRead
00762| const estimatedCacheWriteCost = (analysis.cacheWriteTokens / 1_000_000) * pricing.cacheWrite
00763| const estimatedSessionCost = estimatedInputCost + estimatedOutputCost + estimatedCacheReadCost + estimatedCacheWriteCost
00764|
00765| return {
00766| isSubscription,
00767| apiSessionCost: analysis.sessionCost,
00768| apiMostRecentCost: analysis.mostRecentCost,
00769| estimatedSessionCost,
00770| estimatedInputCost,
00771| estimatedOutputCost,
00772| estimatedCacheReadCost,
00773| estimatedCacheWriteCost,
00774| pricePerMillionInput: pricing.input,
00775| pricePerMillionOutput: pricing.output,
00776| pricePerMillionCacheRead: pricing.cacheRead,
00777| pricePerMillionCacheWrite: pricing.cacheWrite,
00778| inputTokens: analysis.inputTokens,
00779| outputTokens: analysis.outputTokens,
00780| reasoningTokens: analysis.reasoningTokens,
00781| cacheReadTokens: analysis.cacheReadTokens,
00782| cacheWriteTokens: analysis.cacheWriteTokens,
00783| }
00784| }
00785|
00786| private getPricing(modelName: string): ModelPricing {
00787| const normalizedName = this.normalizeModelName(modelName)
00788|
00789| if (this.pricingData[normalizedName]) return this.pricingData[normalizedName]
00790|
00791| const lowerModel = normalizedName.toLowerCase()
00792| for (const [key, pricing] of Object.entries(this.pricingData)) {
00793| if (lowerModel.startsWith(key.toLowerCase())) return pricing
00794| }
00795|
00796| return this.pricingData["default"] || { input: 1, output: 3, cacheWrite: 0, cacheRead: 0 }
00797| }
00798|
00799| private normalizeModelName(modelName: string): string {
00800| return modelName.includes('/') ? modelName.split('/').pop() || modelName : modelName
00801| }
00802| }
00803|
00804| // Subagent Analyzer
00805|
00806| class SubagentAnalyzer {
00807| constructor(
00808| private client: any,
00809| private costCalculator: CostCalculator,
00810| private pricingData: Record<string, ModelPricing>
00811| ) {}
00812|
00813| async analyzeChildSessions(parentSessionID: string): Promise<SubagentAnalysis> {
00814| const result: SubagentAnalysis = {
00815| subagents: [],
00816| totalInputTokens: 0,
00817| totalOutputTokens: 0,
00818| totalReasoningTokens: 0,
00819| totalCacheReadTokens: 0,
00820| totalCacheWriteTokens: 0,
00821| totalTokens: 0,
00822| totalApiCost: 0,
00823| totalEstimatedCost: 0,
00824| totalApiCalls: 0,
00825| }
00826|
00827| try {
00828| const childrenResponse = await this.client.session.children({ path: { id: parentSessionID } })
00829| const children: ChildSession[] = ((childrenResponse as any)?.data ?? childrenResponse ?? []) as ChildSession[]
00830|
00831| if (!Array.isArray(children) || children.length === 0) return result
00832|
00833| for (const child of children) {
00834| const summary = await this.analyzeChildSession(child)
00835| if (summary) {
00836| result.subagents.push(summary)
00837| result.totalInputTokens += summary.inputTokens
00838| result.totalOutputTokens += summary.outputTokens
00839| result.totalReasoningTokens += summary.reasoningTokens
00840| result.totalCacheReadTokens += summary.cacheReadTokens
00841| result.totalCacheWriteTokens += summary.cacheWriteTokens
00842| result.totalTokens += summary.totalTokens
00843| result.totalApiCost += summary.apiCost
00844| result.totalEstimatedCost += summary.estimatedCost
00845| result.totalApiCalls += summary.assistantMessageCount
00846| }
00847|
00848| const nestedAnalysis = await this.analyzeChildSessions(child.id)
00849| for (const nested of nestedAnalysis.subagents) {
00850| result.subagents.push(nested)
00851| }
00852| result.totalInputTokens += nestedAnalysis.totalInputTokens
00853| result.totalOutputTokens += nestedAnalysis.totalOutputTokens
00854| result.totalReasoningTokens += nestedAnalysis.totalReasoningTokens
00855| result.totalCacheReadTokens += nestedAnalysis.totalCacheReadTokens
00856| result.totalCacheWriteTokens += nestedAnalysis.totalCacheWriteTokens
00857| result.totalTokens += nestedAnalysis.totalTokens
00858| result.totalApiCost += nestedAnalysis.totalApiCost
00859| result.totalEstimatedCost += nestedAnalysis.totalEstimatedCost
00860| result.totalApiCalls += nestedAnalysis.totalApiCalls
00861| }
00862| } catch (error) {
00863| console.error(`Failed to fetch child sessions for ${parentSessionID}:`, error)
00864| }
00865|
00866| return result
00867| }
00868|
00869| private async analyzeChildSession(child: ChildSession): Promise<SubagentSummary | null> {
00870| try {
00871| const messagesResponse = await this.client.session.messages({ path: { id: child.id } })
00872| const messages: SessionMessage[] = ((messagesResponse as any)?.data ?? messagesResponse ?? []) as SessionMessage[]
00873|
00874| if (!Array.isArray(messages) || messages.length === 0) return null
00875|
00876| const agentType = this.extractAgentType(child.title)
00877| let inputTokens = 0, outputTokens = 0, reasoningTokens = 0
00878| let cacheReadTokens = 0, cacheWriteTokens = 0
00879| let apiCost = 0, assistantMessageCount = 0, modelName = "unknown"
00880|
00881| for (const message of messages) {
00882| if (message.info.role === "assistant") {
00883| assistantMessageCount++
00884| const tokens = message.info.tokens
00885| if (tokens) {
00886| inputTokens += Number(tokens.input) || 0
00887| outputTokens += Number(tokens.output) || 0
00888| reasoningTokens += Number(tokens.reasoning) || 0
00889| cacheReadTokens += Number(tokens.cache?.read) || 0
00890| cacheWriteTokens += Number(tokens.cache?.write) || 0
00891| }
00892| apiCost += Number(message.info.cost) || 0
00893| if (message.info.modelID) modelName = message.info.modelID
00894| }
00895| }
00896|
00897| const totalTokens = inputTokens + outputTokens + reasoningTokens + cacheReadTokens + cacheWriteTokens
00898| const pricing = this.getPricing(modelName)
00899| const estimatedCost =
00900| (inputTokens / 1_000_000) * pricing.input +
00901| ((outputTokens + reasoningTokens) / 1_000_000) * pricing.output +
00902| (cacheReadTokens / 1_000_000) * pricing.cacheRead +
00903| (cacheWriteTokens / 1_000_000) * pricing.cacheWrite
00904|
00905| return {
00906| sessionID: child.id, title: child.title, agentType,
00907| inputTokens, outputTokens, reasoningTokens, cacheReadTokens, cacheWriteTokens,
00908| totalTokens, apiCost, estimatedCost, assistantMessageCount,
00909| }
00910| } catch (error) {
00911| console.error(`Failed to analyze child session ${child.id}:`, error)
00912| return null
00913| }
00914| }
00915|
00916| private extractAgentType(title: string): string {
00917| const match = title.match(/@(\w+)\s+subagent/i)
00918| if (match) return match[1]
00919| const words = title.split(/\s+/)
00920| return words[0]?.toLowerCase() || "subagent"
00921| }
00922|
00923| private getPricing(modelName: string): ModelPricing {
00924| const normalizedName = modelName.includes('/') ? modelName.split('/').pop() || modelName : modelName
00925| if (this.pricingData[normalizedName]) return this.pricingData[normalizedName]
00926|
00927| const lowerModel = normalizedName.toLowerCase()
00928| for (const [key, pricing] of Object.entries(this.pricingData)) {
00929| if (lowerModel.startsWith(key.toLowerCase())) return pricing
00930| }
00931|
00932| return this.pricingData["default"] || { input: 1, output: 3, cacheWrite: 0, cacheRead: 0 }
00933| }
00934| }
00935|
00936| // Output Formatter
00937|
00938| class OutputFormatter {
00939| private readonly BAR_WIDTH = 30
00940| private readonly TOKEN_SPACING = 11
00941| private readonly CATEGORY_LABEL_WIDTH = 9
00942| private readonly TOOL_LABEL_WIDTH = 20
00943| private readonly TOP_CONTRIBUTOR_LABEL_WIDTH = 30
00944|
00945| constructor(private costCalculator: CostCalculator) {}
00946|
00947| private formatCategoryBar(
00948| label: string,
00949| tokens: number,
00950| total: number,
00951| labelWidth: number = this.CATEGORY_LABEL_WIDTH
00952| ): string {
00953| if (tokens === 0) return ""
00954|
00955| const percentage = total > 0 ? ((tokens / total) * 100).toFixed(1) : "0.0"
00956| const percentageNum = parseFloat(percentage)
00957| const barWidth = Math.round((percentageNum / 100) * this.BAR_WIDTH)
00958| const bar = "█".repeat(barWidth) + "░".repeat(Math.max(0, this.BAR_WIDTH - barWidth))
00959| const labelPadded = label.padEnd(labelWidth)
00960| const formattedTokens = this.formatNumber(tokens)
00961|
00962| let pct = percentage
00963| if (percentageNum < 10) {
00964| pct = " " + pct
00965| }
00966|
00967| const tokensPart = `(${formattedTokens})`
00968| const spacesNeeded = Math.max(1, this.TOKEN_SPACING - tokensPart.length)
00969| const spacing = " ".repeat(spacesNeeded)
00970|
00971| return `${labelPadded} ${bar} ${spacing}${pct}% ${tokensPart}`
00972| }
00973|
00974| format(analysis: TokenAnalysis): string {
00975| const inputCategories = [
00976| { label: "SYSTEM", tokens: analysis.categories.system.totalTokens },
00977| { label: "USER", tokens: analysis.categories.user.totalTokens },
00978| { label: "TOOLS", tokens: analysis.categories.tools.totalTokens },
00979| ]
00980| const outputCategories = [
00981| { label: "ASSISTANT", tokens: analysis.categories.assistant.totalTokens },
00982| { label: "REASONING", tokens: analysis.categories.reasoning.totalTokens },
00983| ]
00984| const topEntries = this.collectTopEntries(analysis, 5)
00985|
00986| const toolStats = new Map<string, { tokens: number; calls: number }>()
00987| for (const [toolName, calls] of analysis.toolCallCounts.entries()) {
00988| toolStats.set(toolName, { tokens: 0, calls })
00989| }
00990| for (const entry of analysis.categories.tools.allEntries) {
00991| const existing = toolStats.get(entry.label) || { tokens: 0, calls: 0 }
00992| toolStats.set(entry.label, { ...existing, tokens: entry.tokens })
00993| }
00994| const toolEntries = Array.from(toolStats.entries())
00995| .map(([label, stats]) => ({ label, tokens: stats.tokens, calls: stats.calls }))
00996| .sort((a, b) => b.tokens - a.tokens)
00997|
00998| const costEstimate = this.costCalculator.calculateCost(analysis)
00999|
01000| return this.formatVisualOutput(
01001| analysis.sessionID, analysis.model.name, analysis.totalTokens,
01002| analysis.inputTokens, analysis.outputTokens, analysis.reasoningTokens,
01003| analysis.cacheReadTokens, analysis.cacheWriteTokens, analysis.assistantMessageCount,
01004| analysis.mostRecentInput, analysis.mostRecentOutput, analysis.mostRecentReasoning,
01005| analysis.mostRecentCacheRead, analysis.mostRecentCacheWrite,
01006| inputCategories, outputCategories, topEntries, toolEntries, costEstimate,
01007| analysis.subagentAnalysis
01008| )
01009| }
01010|
01011| private formatVisualOutput(
01012| sessionID: string,
01013| modelName: string,
01014| totalTokens: number,
01015| inputTokens: number,
01016| outputTokens: number,
01017| reasoningTokens: number,
01018| cacheReadTokens: number,
01019| cacheWriteTokens: number,
01020| assistantMessageCount: number,
01021| mostRecentInput: number,
01022| mostRecentOutput: number,
01023| mostRecentReasoning: number,
01024| mostRecentCacheRead: number,
01025| mostRecentCacheWrite: number,
01026| inputCategories: Array<{ label: string; tokens: number }>,
01027| outputCategories: Array<{ label: string; tokens: number }>,
01028| topEntries: CategoryEntry[],
01029| toolEntries: Array<{ label: string; tokens: number; calls: number }>,
01030| cost: CostEstimate,
01031| subagentAnalysis?: SubagentAnalysis
01032| ): string {
01033| const lines: string[] = []
01034| const sessionTotal = inputTokens + cacheReadTokens + cacheWriteTokens + outputTokens + reasoningTokens
01035| const mainCost = cost.isSubscription ? cost.estimatedSessionCost : cost.apiSessionCost
01036|
01037| // Header
01038| lines.push(`═══════════════════════════════════════════════════════════════════════════`)
01039| lines.push(`Token Analysis: Session ${sessionID}`)
01040| lines.push(`Model: ${modelName}`)
01041| lines.push(`═══════════════════════════════════════════════════════════════════════════`)
01042| lines.push(``)
01043|
01044| // 1. TOKEN BREAKDOWN BY CATEGORY
01045| lines.push(`TOKEN BREAKDOWN BY CATEGORY`)
01046| lines.push(`─────────────────────────────────────────────────────────────────────────`)
01047| lines.push(`Estimated using tokenizer analysis of message content:`)
01048| lines.push(``)
01049|
01050| const inputTotal = inputCategories.reduce((sum, cat) => sum + cat.tokens, 0)
01051| lines.push(`Input Categories:`)
01052| for (const category of inputCategories) {
01053| const barLine = this.formatCategoryBar(category.label, category.tokens, inputTotal)
01054| if (barLine) lines.push(` ${barLine}`)
01055| }
01056| lines.push(``)
01057| lines.push(` Subtotal: ${this.formatNumber(inputTotal)} estimated input tokens`)
01058| lines.push(``)
01059|
01060| const outputTotal = outputCategories.reduce((sum, cat) => sum + cat.tokens, 0)
01061| lines.push(`Output Categories:`)
01062| for (const category of outputCategories) {
01063| const barLine = this.formatCategoryBar(category.label, category.tokens, outputTotal)
01064| if (barLine) lines.push(` ${barLine}`)
01065| }
01066| lines.push(``)
01067| lines.push(` Subtotal: ${this.formatNumber(outputTotal)} estimated output tokens`)
01068| lines.push(``)
01069| lines.push(`Local Total: ${this.formatNumber(totalTokens)} tokens (estimated)`)
01070|
01071| // 2. TOOL USAGE BREAKDOWN (right after token breakdown)
01072| if (toolEntries.length > 0) {
01073| const toolsTotalTokens = inputCategories.find(c => c.label === "TOOLS")?.tokens || 0
01074| lines.push(``)
01075| lines.push(`TOOL USAGE BREAKDOWN`)
01076| lines.push(`─────────────────────────────────────────────────────────────────────────`)
01077| for (const tool of toolEntries) {
01078| const barLine = this.formatCategoryBar(tool.label, tool.tokens, toolsTotalTokens, this.TOOL_LABEL_WIDTH)
01079| if (barLine) {
01080| const calls = `${tool.calls}x`.padStart(5)
01081| lines.push(`${barLine} ${calls}`)
01082| }
01083| }
01084| }
01085|
01086| // 3. TOP CONTRIBUTORS
01087| if (topEntries.length > 0) {
01088| lines.push(``)
01089| lines.push(`TOP CONTRIBUTORS`)
01090| lines.push(`─────────────────────────────────────────────────────────────────────────`)
01091| for (const entry of topEntries) {
01092| const percentage = ((entry.tokens / totalTokens) * 100).toFixed(1)
01093| const label = `• ${entry.label}`.padEnd(this.TOP_CONTRIBUTOR_LABEL_WIDTH)
01094| const formattedTokens = this.formatNumber(entry.tokens)
01095| lines.push(`${label} ${formattedTokens} tokens (${percentage}%)`)
01096| }
01097| }
01098|
01099| // 4. MOST RECENT API CALL
01100| lines.push(``)
01101| lines.push(`═══════════════════════════════════════════════════════════════════════════`)
01102| lines.push(`MOST RECENT API CALL`)
01103| lines.push(`─────────────────────────────────────────────────────────────────────────`)
01104| lines.push(``)
01105| lines.push(`Raw telemetry from last API response:`)
01106| lines.push(` Input (fresh): ${this.formatNumber(mostRecentInput).padStart(10)} tokens`)
01107| lines.push(` Cache read: ${this.formatNumber(mostRecentCacheRead).padStart(10)} tokens`)
01108| if (mostRecentCacheWrite > 0) {
01109| lines.push(` Cache write: ${this.formatNumber(mostRecentCacheWrite).padStart(10)} tokens`)
01110| }
01111| lines.push(` Output: ${this.formatNumber(mostRecentOutput).padStart(10)} tokens`)
01112| if (mostRecentReasoning > 0) {
01113| lines.push(` Reasoning: ${this.formatNumber(mostRecentReasoning).padStart(10)} tokens`)
01114| }
01115| lines.push(` ───────────────────────────────────`)
01116| lines.push(` Total: ${this.formatNumber(mostRecentInput + mostRecentCacheRead + mostRecentCacheWrite + mostRecentOutput + mostRecentReasoning).padStart(10)} tokens`)
01117|
01118| // 5. SESSION TOTALS
01119| lines.push(``)
01120| lines.push(`═══════════════════════════════════════════════════════════════════════════`)
01121| lines.push(`SESSION TOTALS (All ${assistantMessageCount} API calls)`)
01122| lines.push(`─────────────────────────────────────────────────────────────────────────`)
01123| lines.push(``)
01124| lines.push(`Total tokens processed across the entire session (for cost calculation):`)
01125| lines.push(``)
01126| lines.push(` Input tokens: ${this.formatNumber(inputTokens).padStart(10)} (fresh tokens across all calls)`)
01127| lines.push(` Cache read: ${this.formatNumber(cacheReadTokens).padStart(10)} (cached tokens across all calls)`)
01128| lines.push(` Cache write: ${this.formatNumber(cacheWriteTokens).padStart(10)} (tokens written to cache)`)
01129| lines.push(` Output tokens: ${this.formatNumber(outputTokens).padStart(10)} (all model responses)`)
01130| if (reasoningTokens > 0) {
01131| lines.push(` Reasoning tokens: ${this.formatNumber(reasoningTokens).padStart(10)} (thinking/reasoning)`)
01132| }
01133| lines.push(` ───────────────────────────────────`)
01134| lines.push(` Session Total: ${this.formatNumber(sessionTotal).padStart(10)} tokens (for billing)`)
01135|
01136| // 6. SESSION COST / ESTIMATED SESSION COST
01137| lines.push(``)
01138| lines.push(`═══════════════════════════════════════════════════════════════════════════`)
01139| if (cost.isSubscription) {
01140| lines.push(`ESTIMATED SESSION COST (API Key Pricing)`)
01141| lines.push(`─────────────────────────────────────────────────────────────────────────`)
01142| lines.push(``)
01143| lines.push(`You appear to be on a subscription plan (API cost is $0).`)
01144| lines.push(`Here's what this session would cost with direct API access:`)
01145| lines.push(``)
01146| lines.push(` Input tokens: ${this.formatNumber(inputTokens).padStart(10)} × $${cost.pricePerMillionInput.toFixed(2)}/M = $${cost.estimatedInputCost.toFixed(4)}`)
01147| lines.push(` Output tokens: ${this.formatNumber(outputTokens + reasoningTokens).padStart(10)} × $${cost.pricePerMillionOutput.toFixed(2)}/M = $${cost.estimatedOutputCost.toFixed(4)}`)
01148| if (cacheReadTokens > 0 && cost.pricePerMillionCacheRead > 0) {
01149| lines.push(` Cache read: ${this.formatNumber(cacheReadTokens).padStart(10)} × $${cost.pricePerMillionCacheRead.toFixed(2)}/M = $${cost.estimatedCacheReadCost.toFixed(4)}`)
01150| }
01151| if (cacheWriteTokens > 0 && cost.pricePerMillionCacheWrite > 0) {
01152| lines.push(` Cache write: ${this.formatNumber(cacheWriteTokens).padStart(10)} × $${cost.pricePerMillionCacheWrite.toFixed(2)}/M = $${cost.estimatedCacheWriteCost.toFixed(4)}`)
01153| }
01154| lines.push(`─────────────────────────────────────────────────────────────────────────`)
01155| lines.push(`ESTIMATED TOTAL: $${cost.estimatedSessionCost.toFixed(4)}`)
01156| lines.push(``)
01157| lines.push(`Note: This estimate uses standard API pricing from models.json.`)
01158| lines.push(`Actual API costs may vary based on provider and context size.`)
01159| } else {
01160| lines.push(`SESSION COST`)
01161| lines.push(`─────────────────────────────────────────────────────────────────────────`)
01162| lines.push(``)
01163| lines.push(`Token usage breakdown:`)
01164| lines.push(` Input tokens: ${this.formatNumber(inputTokens).padStart(10)}`)
01165| lines.push(` Output tokens: ${this.formatNumber(outputTokens).padStart(10)}`)
01166| if (reasoningTokens > 0) {
01167| lines.push(` Reasoning tokens: ${this.formatNumber(reasoningTokens).padStart(10)}`)
01168| }
01169| if (cacheReadTokens > 0) {
01170| lines.push(` Cache read: ${this.formatNumber(cacheReadTokens).padStart(10)}`)
01171| }
01172| if (cacheWriteTokens > 0) {
01173| lines.push(` Cache write: ${this.formatNumber(cacheWriteTokens).padStart(10)}`)
01174| }
01175| lines.push(``)
01176| lines.push(`─────────────────────────────────────────────────────────────────────────`)
01177| lines.push(`ACTUAL COST (from API): $${cost.apiSessionCost.toFixed(4)}`)
01178| const diff = Math.abs(cost.apiSessionCost - cost.estimatedSessionCost)
01179| const diffPercent = cost.apiSessionCost > 0 ? (diff / cost.apiSessionCost) * 100 : 0
01180| if (diffPercent > 5) {
01181| lines.push(`Estimated cost: $${cost.estimatedSessionCost.toFixed(4)} (${cost.estimatedSessionCost > cost.apiSessionCost ? "+" : "-"}${diffPercent.toFixed(1)}% diff)`) // diffPercent > 5 here, so no zero-diff branch needed
01182| }
01183| lines.push(``)
01184| lines.push(`Note: Actual cost from OpenCode includes provider-specific pricing`)
01185| lines.push(`and 200K+ context adjustments.`)
01186| }
01187|
01188| // 7. SUBAGENT COSTS (if any)
01189| if (subagentAnalysis && subagentAnalysis.subagents.length > 0) {
01190| const subagentLabelWidth = 25
01191| const subagentTotalCost = cost.isSubscription
01192| ? subagentAnalysis.totalEstimatedCost
01193| : subagentAnalysis.totalApiCost
01194|
01195| lines.push(``)
01196| lines.push(`═══════════════════════════════════════════════════════════════════════════`)
01197| lines.push(`SUBAGENT COSTS (${subagentAnalysis.subagents.length} child sessions, ${subagentAnalysis.totalApiCalls} API calls)`)
01198| lines.push(`─────────────────────────────────────────────────────────────────────────`)
01199| lines.push(``)
01200| for (const subagent of subagentAnalysis.subagents) {
01201| const label = `${subagent.agentType}`.padEnd(subagentLabelWidth)
01202| const costStr = cost.isSubscription
01203| ? `$${subagent.estimatedCost.toFixed(4)}`
01204| : `$${subagent.apiCost.toFixed(4)}`
01205| const tokensStr = `(${this.formatNumber(subagent.totalTokens)} tokens, ${subagent.assistantMessageCount} calls)`
01206| lines.push(` ${label} ${costStr.padStart(10)} ${tokensStr}`)
01207| }
01208| lines.push(`─────────────────────────────────────────────────────────────────────────`)
01209| lines.push(`Subagent Total:${' '.repeat(subagentLabelWidth - 14)} $${subagentTotalCost.toFixed(4)} (${this.formatNumber(subagentAnalysis.totalTokens)} tokens, ${subagentAnalysis.totalApiCalls} calls)`)
01210| }
01211|
01212| // 8. SUMMARY (always last)
01213| lines.push(``)
01214| lines.push(`═══════════════════════════════════════════════════════════════════════════`)
01215| lines.push(`SUMMARY`)
01216| lines.push(`─────────────────────────────────────────────────────────────────────────`)
01217| lines.push(``)
01218| lines.push(` Cost Tokens API Calls`)
01219|
01220| if (subagentAnalysis && subagentAnalysis.subagents.length > 0) {
01221| const subagentTotalCost = cost.isSubscription
01222| ? subagentAnalysis.totalEstimatedCost
01223| : subagentAnalysis.totalApiCost
01224| const grandTotalCost = mainCost + subagentTotalCost
01225| const grandTotalTokens = sessionTotal + subagentAnalysis.totalTokens
01226| const grandTotalApiCalls = assistantMessageCount + subagentAnalysis.totalApiCalls
01227|
01228| lines.push(` Main session: $${mainCost.toFixed(4).padStart(10)} ${this.formatNumber(sessionTotal).padStart(10)} ${assistantMessageCount.toString().padStart(5)}`)
01229| lines.push(` Subagents: $${subagentTotalCost.toFixed(4).padStart(10)} ${this.formatNumber(subagentAnalysis.totalTokens).padStart(10)} ${subagentAnalysis.totalApiCalls.toString().padStart(5)}`)
01230| lines.push(`─────────────────────────────────────────────────────────────────────────`)
01231| lines.push(` TOTAL: $${grandTotalCost.toFixed(4).padStart(10)} ${this.formatNumber(grandTotalTokens).padStart(10)} ${grandTotalApiCalls.toString().padStart(5)}`)
01232| } else {
01233| lines.push(` Session: $${mainCost.toFixed(4).padStart(10)} ${this.formatNumber(sessionTotal).padStart(10)} ${assistantMessageCount.toString().padStart(5)}`)
01234| }
01235|
01236| lines.push(``)
01237| lines.push(`═══════════════════════════════════════════════════════════════════════════`)
01238|
01239| return lines.join("\n")
01240| }
01241|
01242| private collectTopEntries(analysis: TokenAnalysis, limit: number): CategoryEntry[] {
01243| const pool = [
01244| ...analysis.categories.system.allEntries,
01245| ...analysis.categories.user.allEntries,
01246| ...analysis.categories.assistant.allEntries,
01247| ...analysis.categories.tools.allEntries,
01248| ...analysis.categories.reasoning.allEntries,
01249| ]
01250| .filter((entry) => entry.tokens > 0)
01251| .sort((a, b) => b.tokens - a.tokens)
01252|
01253| return pool.slice(0, limit)
01254| }
01255|
01256| private formatNumber(value: number): string {
01257| return new Intl.NumberFormat("en-US").format(value)
01258| }
01259| }
01260|
01261| // Plugin Export
01262|
01263| export const TokenAnalyzerPlugin: Plugin = async ({ client }) => {
01264| const pricingData = await loadModelPricing()
01265|
01266| const tokenizerManager = new TokenizerManager()
01267| const modelResolver = new ModelResolver()
01268| const contentCollector = new ContentCollector()
01269| const analysisEngine = new TokenAnalysisEngine(tokenizerManager, contentCollector)
01270| const costCalculator = new CostCalculator(pricingData)
01271| const subagentAnalyzer = new SubagentAnalyzer(client, costCalculator, pricingData)
01272| const formatter = new OutputFormatter(costCalculator)
01273|
01274| return {
01275| tool: {
01276| tokenscope: tool({
01277| description:
01278| "Analyze token usage across the current session with detailed breakdowns by category (system, user, assistant, tools, reasoning). " +
01279| "Provides visual charts, identifies top token consumers, and includes costs from subagent (Task tool) child sessions.",
01280| args: {
01281| sessionID: tool.schema.string().optional(),
01282| limitMessages: tool.schema.number().int().min(1).max(10).optional(),
01283| includeSubagents: tool.schema.boolean().optional().describe("Include token costs from subagent child sessions (default: true)"),
01284| },
01285| async execute(args, context) {
01286| const sessionID = args.sessionID ?? context.sessionID
01287| if (!sessionID) {
01288| throw new Error("No session ID available for token analysis")
01289| }
01290|
01291| const response = await client.session.messages({ path: { id: sessionID } })
01292| const messages: SessionMessage[] = ((response as any)?.data ?? response ?? []) as SessionMessage[]
01293|
01294| if (!Array.isArray(messages) || messages.length === 0) {
01295| return `Session ${sessionID} has no messages yet.`
01296| }
01297|
01298| const tokenModel = modelResolver.resolveTokenModel(messages)
01299| const analysis = await analysisEngine.analyze(
01300| sessionID,
01301| messages,
01302| tokenModel,
01303| args.limitMessages ?? DEFAULT_ENTRY_LIMIT
01304| )
01305|
01306| if (args.includeSubagents !== false) {
01307| analysis.subagentAnalysis = await subagentAnalyzer.analyzeChildSessions(sessionID)
01308| }
01309|
01310| const output = formatter.format(analysis)
01311| const outputPath = path.join(process.cwd(), 'token-usage-output.txt')
01312|
01313| try {
01314| try { await fs.unlink(outputPath) } catch {}
01315| await fs.writeFile(outputPath, output, { encoding: 'utf8', flag: 'w' })
01316| } catch (error) {
01317| throw new Error(`Failed to write token analysis to ${outputPath}: ${error}`)
01318| }
01319|
01320| const timestamp = new Date().toISOString()
01321| const formattedTotal = new Intl.NumberFormat("en-US").format(analysis.totalTokens)
01322|
01323| let summaryMsg = `Token analysis complete! Full report saved to: ${outputPath}\n\nTimestamp: ${timestamp}\nMain session tokens: ${formattedTotal}`
01324|
01325| if (analysis.subagentAnalysis && analysis.subagentAnalysis.subagents.length > 0) {
01326| const subagentTokens = new Intl.NumberFormat("en-US").format(analysis.subagentAnalysis.totalTokens)
01327| const grandTotal = new Intl.NumberFormat("en-US").format(analysis.totalTokens + analysis.subagentAnalysis.totalTokens)
01328| summaryMsg += `\nSubagent sessions: ${analysis.subagentAnalysis.subagents.length} (${subagentTokens} tokens)`
01329| summaryMsg += `\nGrand total: ${grandTotal} tokens`
01330| }
01331|
01332| summaryMsg += `\n\nUse: cat token-usage-output.txt (or read the file) to view the complete analysis.`
01333|
01334| return summaryMsg
01335| },
01336| }),
01337| },
01338| }
01339| }
01340|
(End of file - total 1340 lines)
</file>
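Worth noting for the split question while the file is fresh: everything below the type declarations is plain classes that are only wired together inside `TokenAnalyzerPlugin`, so they can move to sibling modules as long as `tokenscope.ts` remains the entry file. A minimal sketch, assuming a hypothetical `plugin/tokenscope/` subdirectory (opencode appears to treat each top-level file in `plugin/` as its own plugin entry, so nested modules should live one level down — worth verifying against the sst/opencode docs):

```typescript
// Hypothetical plugin/tokenscope/format.ts — the path and names are illustrative.
// Pure helpers like these are the safest pieces to lift out of tokenscope.ts:
// they import nothing from "@opencode-ai/plugin", so moving them cannot affect
// how opencode discovers the plugin entry.

export function formatNumber(value: number): string {
  // Identical to OutputFormatter.formatNumber in the current file.
  return new Intl.NumberFormat("en-US").format(value) // formatNumber(1234567) → "1,234,567"
}

export function barFor(tokens: number, total: number, width = 30): string {
  // Same bar math as OutputFormatter.formatCategoryBar, as a free function.
  const pct = total > 0 ? (tokens / total) * 100 : 0
  const filled = Math.round((pct / 100) * width)
  return "█".repeat(filled) + "░".repeat(Math.max(0, width - filled))
}

// tokenscope.ts would then start with:
//   import { formatNumber, barFor } from "./tokenscope/format"
```

The same pattern applies to `OutputFormatter`, `SubagentAnalyzer`, and the pricing helpers; only the `Plugin` export at the bottom needs to stay in `tokenscope.ts`.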
<file>
00001| #!/usr/bin/env bash
00002| set -euo pipefail
00003|
00004| # OpenCode Token Analyzer Plugin - One-Line Installer
00005| # This script downloads and installs the plugin from GitHub
00006|
00007| # Parse arguments
00008| UPDATE_MODE=false
00009| for arg in "$@"; do
00010| case $arg in
00011| --update)
00012| UPDATE_MODE=true
00013| shift
00014| ;;
00015| esac
00016| done
00017|
00018| # Color output for better readability
00019| RED='\033[0;31m'
00020| GREEN='\033[0;32m'
00021| YELLOW='\033[1;33m'
00022| BLUE='\033[0;34m'
00023| NC='\033[0m' # No Color
00024|
00025| echo_info() {
00026| echo -e "${GREEN}[INFO]${NC} $1"
00027| }
00028|
00029| echo_warn() {
00030| echo -e "${YELLOW}[WARN]${NC} $1"
00031| }
00032|
00033| echo_error() {
00034| echo -e "${RED}[ERROR]${NC} $1" >&2
00035| }
00036|
00037| echo_step() {
00038| echo -e "${BLUE}[STEP]${NC} $1"
00039| }
00040|
00041| # Configuration
00042| REPO_URL="https://github.com/ramtinJ95/opencode-tokenscope"
00043| OPENCODE_DIR="${HOME}/.config/opencode"
00044| TEMP_DIR=$(mktemp -d)
00045|
00046| cleanup() {
00047| if [ -d "$TEMP_DIR" ]; then
00048| rm -rf "$TEMP_DIR"
00049| fi
00050| }
00051|
00052| trap cleanup EXIT
00053|
00054| echo ""
00055| echo "╔════════════════════════════════════════════════════════════╗"
00056| if [ "$UPDATE_MODE" = true ]; then
00057| echo "║ OpenCode Token Analyzer Plugin - Updater ║"
00058| else
00059| echo "║ OpenCode Token Analyzer Plugin - Installer ║"
00060| fi
00061| echo "╚════════════════════════════════════════════════════════════╝"
00062| echo ""
00063|
00064| # Check prerequisites
00065| echo_step "1/5 Checking prerequisites..."
00066|
00067| if [ ! -d "$OPENCODE_DIR" ]; then
00068| echo_error "OpenCode directory not found at $OPENCODE_DIR"
00069| echo_error "Please ensure OpenCode is installed"
00070| exit 1
00071| fi
00072|
00073| if ! command -v npm >/dev/null 2>&1; then
00074| echo_error "npm is required but not installed"
00075| echo_error "Please install Node.js and npm first"
00076| exit 1
00077| fi
00078|
00079| if ! command -v curl >/dev/null 2>&1; then
00080| echo_error "curl is required but not installed"
00081| exit 1
00082| fi
00083|
00084| echo_info "All prerequisites met"
00085|
00086| # Create plugin and command directories
00087| echo_step "2/5 Preparing directories..."
00088| mkdir -p "$OPENCODE_DIR/plugin"
00089| mkdir -p "$OPENCODE_DIR/command"
00090| echo_info "Directories ready"
00091|
00092| # Download files
00093| echo_step "3/5 Downloading plugin files..."
00094|
00095| FILES=(
00096| "plugin/tokenscope.ts"
00097| "plugin/models.json"
00098| "plugin/package.json"
00099| "plugin/install.sh"
00100| "command/tokenscope.md"
00101| )
00102|
00103| for file in "${FILES[@]}"; do
00104| filename=$(basename "$file")
00105| dir=$(dirname "$file")
00106|
00107| echo_info "Downloading $filename..."
00108|
00109| if curl -fsSL "$REPO_URL/raw/main/$file" -o "$TEMP_DIR/$filename" 2>/dev/null; then
00110| # Move to appropriate directory
00111| if [ "$dir" = "plugin" ]; then
00112| mv "$TEMP_DIR/$filename" "$OPENCODE_DIR/plugin/$filename"
00113| else
00114| mv "$TEMP_DIR/$filename" "$OPENCODE_DIR/command/$filename"
00115| fi
00116| else
00117| echo_error "Failed to download $file"
00118| echo_error "Please check your internet connection and try again"
00119| exit 1
00120| fi
00121| done
00122|
00123| echo_info "All files downloaded successfully"
00124|
00125| # Install dependencies
00126| echo_step "4/5 Installing dependencies..."
00127|
00128| DEPS_EXIST=false
00129| if [ -d "$OPENCODE_DIR/plugin/node_modules/js-tiktoken" ] && [ -d "$OPENCODE_DIR/plugin/node_modules/@huggingface/transformers" ]; then
00130| DEPS_EXIST=true
00131| fi
00132|
00133| if [ "$UPDATE_MODE" = true ] && [ "$DEPS_EXIST" = true ]; then
00134| echo_info "Update mode: Dependencies already installed, skipping..."
00135| else
00136| echo_info "This may take 1-2 minutes..."
00137| cd "$OPENCODE_DIR/plugin"
00138| if npm install --prefix "$OPENCODE_DIR/plugin" js-tiktoken@1.0.15 @huggingface/transformers@3.1.2 --save; then
00139| echo_info "Dependencies installed successfully"
00140| else
00141| echo_error "Failed to install dependencies"
00142| echo_error "You can try running manually: cd ~/.config/opencode/plugin && npm install"
00143| exit 1
00144| fi
00145| fi
00146|
00147| # Verify installation
00148| echo_step "5/5 Verifying installation..."
00149|
00150| REQUIRED_FILES=(
00151| "$OPENCODE_DIR/plugin/tokenscope.ts"
00152| "$OPENCODE_DIR/plugin/models.json"
00153| "$OPENCODE_DIR/plugin/node_modules/js-tiktoken"
00154| "$OPENCODE_DIR/plugin/node_modules/@huggingface/transformers"
00155| "$OPENCODE_DIR/command/tokenscope.md"
00156| )
00157|
00158| all_present=true
00159| for file in "${REQUIRED_FILES[@]}"; do
00160| if [ ! -e "$file" ]; then
00161| echo_warn "Missing: $file"
00162| all_present=false
00163| fi
00164| done
00165|
00166| if [ "$all_present" = true ]; then
00167| echo_info "All files verified"
00168| else
00169| echo_error "Some files are missing"
00170| exit 1
00171| fi
00172|
00173| # Get installed version
00174| INSTALLED_VERSION=$(grep -o '"version": *"[^"]*"' "$OPENCODE_DIR/plugin/package.json" | cut -d'"' -f4)
00175|
00176| echo ""
00177| echo "╔════════════════════════════════════════════════════════════╗"
00178| if [ "$UPDATE_MODE" = true ]; then
00179| echo "║ Update Complete! ║"
00180| else
00181| echo "║ Installation Complete! ║"
00182| fi
00183| echo "╚════════════════════════════════════════════════════════════╝"
00184| echo ""
00185| echo_info "Version: $INSTALLED_VERSION"
00186| echo_info "Plugin installed at: $OPENCODE_DIR/plugin/tokenscope.ts"
00187| echo_info "Command installed at: $OPENCODE_DIR/command/tokenscope.md"
00188| echo ""
00189| echo_step "Next steps:"
00190| echo " 1. Restart OpenCode"
00191| echo " 2. Type /tokenscope in any session"
00192| echo " 3. View full report: cat token-usage-output.txt"
00193| echo ""
00194| echo_info "For help and documentation, visit:"
00195| echo_info "$REPO_URL"
00196| echo ""
00197|
00198|
(End of file - total 198 lines)
</file>
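One `install.sh` detail that matters for any future split: the download loop above only distinguishes `plugin` from `command` files, and `curl -o` does not create nested directories on its own. A sketch of the adjustment, with hypothetical module names (`tokenscope/*.ts` is illustrative, not a real layout):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sketch only: extends install.sh's FILES array with hypothetical split-out
# modules. The tokenscope/*.ts entries are placeholders for whatever the
# split actually produces.
FILES=(
  "plugin/tokenscope.ts"
  "plugin/tokenscope/types.ts"
  "plugin/tokenscope/format.ts"
  "plugin/models.json"
  "plugin/package.json"
  "plugin/install.sh"
  "command/tokenscope.md"
)

DEST=$(mktemp -d)   # stand-in for $OPENCODE_DIR
for file in "${FILES[@]}"; do
  dir=$(dirname "$file")        # e.g. "plugin/tokenscope", not just "plugin"
  mkdir -p "$DEST/$dir"         # curl -o fails if the target directory is missing
  echo "would download $file -> $DEST/$dir/"
done
rm -rf "$DEST"
```

Keying the destination off `dirname` rather than the current `if [ "$dir" = "plugin" ]` branch keeps the loop correct at any nesting depth, so adding a module later only means adding one line to `FILES`.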
<file>
00001| {
00002| "name": "opencode-tokenscope",
00003| "version": "1.2.1",
00004| "description": "OpenCode plugin for detailed token usage analysis",
00005| "type": "module",
00006| "dependencies": {}
00007| }
00008|
(End of file - total 8 lines)
</file>
<file>
00001| # OpenCode-Tokenscope: Token Analyzer Plugin
00002|
00003| > Comprehensive token usage analysis and cost tracking for OpenCode AI sessions
00004|
00005| Track and optimize your token usage across system prompts, user messages, tool outputs, and more. Get detailed breakdowns, accurate cost estimates, and visual insights for your AI development workflow.
00006|
00007| ## Features
00008|
00009| ### Comprehensive Token Analysis
00010| - **5 Category Breakdown**: System prompts, user messages, assistant responses, tool outputs, and reasoning traces
00011| - **Visual Charts**: Easy-to-read ASCII bar charts with percentages and token counts
00012| - **Smart Inference**: Automatically infers system prompts from API telemetry (since they're not exposed in session messages)
00013|
00014| ### Accurate Cost Tracking
00015| - **41+ Models Supported**: Comprehensive pricing database for Claude, GPT, DeepSeek, Llama, Mistral, and more
00016| - **Cache-Aware Pricing**: Properly handles cache read/write tokens with discounted rates
00017| - **Session-Wide Billing**: Aggregates costs across all API calls in your session
00018|
00019| ### Subagent Cost Tracking
00020| - **Child Session Analysis**: Recursively analyzes all subagent sessions spawned by the Task tool
00021| - **Aggregated Totals**: Shows combined tokens, costs, and API calls across main session and all subagents
00022| - **Per-Agent Breakdown**: Lists each subagent with its type, token usage, cost, and API call count
00023| - **Optional Toggle**: Enable/disable subagent analysis with the `includeSubagents` parameter
00024|
00025| ### Advanced Features
00026| - **Tool Usage Stats**: Track which tools consume the most tokens and how many times each is called
00027| - **API Call Tracking**: See total API calls for main session and subagents
00028| - **Top Contributors**: Identify the biggest token consumers
00029| - **Model Normalization**: Handles `provider/model` format automatically
00030| - **Multi-Tokenizer Support**: Uses official tokenizers (tiktoken for OpenAI, transformers for others)
00031|
00032| ## Quick Install
00033|
00034| ### One-Line Install (Recommended)
00035|
00036| ```bash
00037| curl -sSL https://raw.githubusercontent.com/ramtinJ95/opencode-tokenscope/main/plugin/install.sh | bash
00038| ```
00039|
00040| Then restart OpenCode and run `/tokenscope`
00041|
00042| ## Manual Installation
00043|
00044| <details>
00045| <summary>Click to expand manual installation steps</summary>
00046|
00047| ### Requirements
00048| - OpenCode installed (`~/.config/opencode` directory exists)
00049| - npm (for tokenizer dependencies)
00050| - ~50MB disk space (for tokenizer models)
00051|
00052| ### Installation Steps
00053|
00054| 1. **Navigate to OpenCode config**:
00055| ```bash
00056| cd ~/.config/opencode
00057| ```
00058|
00059| 2. **Download plugin files**:
00060| ```bash
00061| # Download to plugin directory
00062| cd plugin
00063| curl -O https://raw.githubusercontent.com/ramtinJ95/opencode-tokenscope/main/plugin/tokenscope.ts
00064| curl -O https://raw.githubusercontent.com/ramtinJ95/opencode-tokenscope/main/plugin/models.json
00065| curl -O https://raw.githubusercontent.com/ramtinJ95/opencode-tokenscope/main/plugin/install.sh
00066| curl -O https://raw.githubusercontent.com/ramtinJ95/opencode-tokenscope/main/plugin/package.json
00067| ```
00068|
00069| 3. **Download command file**:
00070| ```bash
00071| cd ../command
00072| curl -O https://raw.githubusercontent.com/ramtinJ95/opencode-tokenscope/main/command/tokenscope.md
00073| ```
00074|
00075| 4. **Install dependencies**:
00076| ```bash
00077| cd ../plugin
00078| chmod +x install.sh
00079| ./install.sh
00080| ```
00081|
00082| 5. **Restart OpenCode**
00083|
00084| 6. **Test**: Run `/tokenscope` in any session
00085|
00086| </details>
00087|
00088| ## Updating
00089|
00090| ### Quick Update (v1.2.1+)
00091|
00092| If you have v1.2.1 or later installed, use the local update script:
00093|
00094| ```bash
00095| ~/.config/opencode/plugin/install.sh --update
00096| ```
00097|
00098| ### Update from v1.2.0 or Earlier
00099|
00100| Use the remote script (this will also install the local update script for future use):
00101|
00102| ```bash
00103| curl -sSL https://raw.githubusercontent.com/ramtinJ95/opencode-tokenscope/main/plugin/install.sh | bash -s -- --update
00104| ```
00105|
00106| Both methods download the latest plugin files while skipping dependency installation (faster).
00107|
00108| ### Full Reinstall
00109|
00110| For a full reinstall (if you're having issues):
00111|
00112| ```bash
00113| curl -sSL https://raw.githubusercontent.com/ramtinJ95/opencode-tokenscope/main/plugin/install.sh | bash
00114| ```
00115|
00116| ## Usage
00117|
00118| ### Basic Command
00119|
00120| Simply type in OpenCode:
00121| ```
00122| /tokenscope
00123| ```
00124|
00125| The plugin will:
00126| 1. Analyze the current session
00127| 2. Count tokens across all categories
00128| 3. Analyze all subagent (Task tool) child sessions recursively
00129| 4. Calculate costs based on API telemetry
00130| 5. Display results in terminal
00131| 6. Save detailed report to `token-usage-output.txt`
00132|
00133| ### Options
00134|
00135| The tool accepts optional parameters:
00136|
00137| - **sessionID**: Analyze a specific session instead of the current one
00138| - **limitMessages**: Limit entries shown per category (1-10, default: 3)
00139| - **includeSubagents**: Include subagent child session costs (default: true)
00140|
00141| ### Reading the Full Report
00142|
00143| ```bash
00144| cat token-usage-output.txt
00145| ```
00146|
00147| ### Example Output
00148|
00149| ```
00150| ═══════════════════════════════════════════════════════════════════════════
00151| Token Analysis: Session ses_50c712089ffeshuuuJPmOoXCPX
00152| Model: claude-opus-4-5
00153| ═══════════════════════════════════════════════════════════════════════════
00154|
00155| TOKEN BREAKDOWN BY CATEGORY
00156| ─────────────────────────────────────────────────────────────────────────
00157| Estimated using tokenizer analysis of message content:
00158|
00159| Input Categories:
00160| SYSTEM ██████████████░░░░░░░░░░░░░░░░ 45.8% (22,367)
00161| USER ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 0.8% (375)
00162| TOOLS ████████████████░░░░░░░░░░░░░░ 53.5% (26,146)
00163|
00164| Subtotal: 48,888 estimated input tokens
00165|
00166| Output Categories:
00167| ASSISTANT ██████████████████████████████ 100.0% (1,806)
00168|
00169| Subtotal: 1,806 estimated output tokens
00170|
00171| Local Total: 50,694 tokens (estimated)
00172|
00173| TOOL USAGE BREAKDOWN
00174| ─────────────────────────────────────────────────────────────────────────
00175| bash ██████████░░░░░░░░░░░░░░░░░░░░ 34.0% (8,886) 4x
00176| read ██████████░░░░░░░░░░░░░░░░░░░░ 33.1% (8,643) 3x
00177| task ████████░░░░░░░░░░░░░░░░░░░░░░ 27.7% (7,245) 4x
00178| webfetch █░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 4.9% (1,286) 1x
00179| tokenscope ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 0.3% (75) 2x
00180| batch ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 0.0% (11) 1x
00181|
00182| TOP CONTRIBUTORS
00183| ─────────────────────────────────────────────────────────────────────────
00184| • System (inferred from API) 22,367 tokens (44.1%)
00185| • bash 8,886 tokens (17.5%)
00186| • read 8,643 tokens (17.0%)
00187| • task 7,245 tokens (14.3%)
00188| • webfetch 1,286 tokens (2.5%)
00189|
00190| ═══════════════════════════════════════════════════════════════════════════
00191| MOST RECENT API CALL
00192| ─────────────────────────────────────────────────────────────────────────
00193|
00194| Raw telemetry from last API response:
00195| Input (fresh): 2 tokens
00196| Cache read: 48,886 tokens
00197| Cache write: 54 tokens
00198| Output: 391 tokens
00199| ───────────────────────────────────
00200| Total: 49,333 tokens
00201|
00202| ═══════════════════════════════════════════════════════════════════════════
00203| SESSION TOTALS (All 15 API calls)
00204| ─────────────────────────────────────────────────────────────────────────
00205|
00206| Total tokens processed across the entire session (for cost calculation):
00207|
00208| Input tokens: 10 (fresh tokens across all calls)
00209| Cache read: 320,479 (cached tokens across all calls)
00210| Cache write: 51,866 (tokens written to cache)
00211| Output tokens: 3,331 (all model responses)
00212| ───────────────────────────────────
00213| Session Total: 375,686 tokens (for billing)
00214|
00215| ═══════════════════════════════════════════════════════════════════════════
00216| ESTIMATED SESSION COST (API Key Pricing)
00217| ─────────────────────────────────────────────────────────────────────────
00218|
00219| You appear to be on a subscription plan (API cost is $0).
00220| Here's what this session would cost with direct API access:
00221|
00222| Input tokens: 10 × $5.00/M = $0.0001
00223| Output tokens: 3,331 × $25.00/M = $0.0833
00224| Cache read: 320,479 × $0.50/M = $0.1602
00225| Cache write: 51,866 × $6.25/M = $0.3242
00226| ─────────────────────────────────────────────────────────────────────────
00227| ESTIMATED TOTAL: $0.5677
00228|
00229| Note: This estimate uses standard API pricing from models.json.
00230| Actual API costs may vary based on provider and context size.
00231|
00232| ═══════════════════════════════════════════════════════════════════════════
00233| SUBAGENT COSTS (4 child sessions, 23 API calls)
00234| ─────────────────────────────────────────────────────────────────────────
00235|
00236| docs $0.3190 (194,701 tokens, 8 calls)
00237| general $0.2957 (104,794 tokens, 4 calls)
00238| docs $0.2736 (69,411 tokens, 4 calls)
00239| general $0.5006 (197,568 tokens, 7 calls)
00240| ─────────────────────────────────────────────────────────────────────────
00241| Subagent Total: $1.3888 (566,474 tokens, 23 calls)
00242|
00243| ═══════════════════════════════════════════════════════════════════════════
00244| SUMMARY
00245| ─────────────────────────────────────────────────────────────────────────
00246|
00247| Cost Tokens API Calls
00248| Main session: $ 0.5677 375,686 15
00249| Subagents: $ 1.3888 566,474 23
00250| ─────────────────────────────────────────────────────────────────────────
00251| TOTAL: $ 1.9565 942,160 38
00252|
00253| ═══════════════════════════════════════════════════════════════════════════
00254|
00255| ```
00256| ## Supported Models
00257|
00258| **41+ models with accurate pricing:**
00259|
00260| ### Claude Models
00261| - Claude Opus 4.5, 4.1, 4
00262| - Claude Sonnet 4, 4-5, 3.7, 3.5, 3
00263| - Claude Haiku 4-5, 3.5, 3
00264|
00265| ### OpenAI Models
00266| - GPT-4, GPT-4 Turbo, GPT-4o, GPT-4o Mini
00267| - GPT-3.5 Turbo
00268| - GPT-5 and all its variations
00269|
00270| ### Other Models
00271| - DeepSeek (R1, V2, V3)
00272| - Llama (3.1, 3.2, 3.3)
00273| - Mistral (Large, Small)
00274| - Qwen, Kimi, GLM, Grok
00275| - And more...
00276|
00277| **Free/Open models** are marked with zero pricing.
00278|
00279| ## Customization
00280|
00281| ### Add New Model Pricing
00282|
00283| Edit `~/.config/opencode/plugin/models.json`:
00284|
00285| ```json
00286| {
00287| "your-model-name": {
00288| "input": 1.50,
00289| "output": 5.00,
00290| "cacheWrite": 0.50,
00291| "cacheRead": 0.10
00292| }
00293| }
00294| ```
00295|
00296| Save the file and restart OpenCode. The plugin will automatically use the new pricing.
00297|
00298| ### Update Existing Model Pricing
00299|
00300| Simply edit the values in `models.json` and restart OpenCode. No code changes needed!
00301|
00302| ## How It Works
00303|
00304| ### System Prompt Inference
00305| OpenCode doesn't expose system prompts in the session messages API. The plugin intelligently infers them using:
00306|
00307| ```
00308| System Tokens = (API Input + Cache Read) - (User Tokens + Tool Tokens)
00309| ```
00310|
00311| This works because the API input includes everything sent to the model.
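The arithmetic can be sketched as a small helper (a hypothetical illustration; the function name and the clamping are assumptions, not the plugin's actual code):

```typescript
// Hypothetical sketch of the inference formula above; all names are illustrative.
// apiInput and cacheRead come from API telemetry; userTokens and toolTokens
// come from local tokenizer counts.
function inferSystemTokens(
  apiInput: number,
  cacheRead: number,
  userTokens: number,
  toolTokens: number,
): number {
  const inferred = apiInput + cacheRead - (userTokens + toolTokens)
  // Clamp at zero in case local estimates overshoot the telemetry.
  return Math.max(0, inferred)
}
```

Plugging in the example report above: (2 + 48,886) − (375 + 26,146) = 22,367 system tokens, matching the SYSTEM row.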
00312|
00313| ### Dual Tracking
00314| - **Current Context**: Uses the most recent API call with non-zero tokens (matches TUI)
00315| - **Session Total**: Aggregates all API calls for accurate billing
00316|
00317| ### Subagent Analysis
00318| The plugin uses OpenCode's session API to:
00319| 1. Fetch all child sessions spawned by the Task tool
00320| 2. Recursively analyze nested subagents (subagents can spawn their own subagents)
00321| 3. Aggregate tokens, costs, and API call counts
00322| 4. Calculate estimated costs using the same pricing as the main session
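The recursion in steps 1–3 amounts to a depth-first fold over the session tree; a hedged sketch (the node shape here is an assumption for illustration, not OpenCode's actual API types):

```typescript
// Illustrative session-tree aggregation; SessionNode is an assumed shape.
interface SessionNode {
  tokens: number
  cost: number
  calls: number
  children: SessionNode[] // subagents can spawn their own subagents
}

// Depth-first aggregation of tokens, cost, and API calls.
function aggregate(node: SessionNode): { tokens: number; cost: number; calls: number } {
  const totals = { tokens: node.tokens, cost: node.cost, calls: node.calls }
  for (const child of node.children) {
    const childTotals = aggregate(child)
    totals.tokens += childTotals.tokens
    totals.cost += childTotals.cost
    totals.calls += childTotals.calls
  }
  return totals
}
```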
00323|
00324| ### Model Name Normalization
00325| Automatically handles `provider/model` format (e.g., `qwen/qwen3-coder` → `qwen3-coder`)
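A minimal sketch of that normalization (illustrative only; the plugin's real logic may handle more edge cases):

```typescript
// Strip a leading "provider/" segment if present; otherwise return the name unchanged.
function normalizeModelName(model: string): string {
  const slash = model.indexOf("/")
  return slash === -1 ? model : model.slice(slash + 1)
}

normalizeModelName("qwen/qwen3-coder") // → "qwen3-coder"
```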
00326|
00327| ## Troubleshooting
00328|
00329| ### "Dependencies missing" Error
00330|
00331| Run the installer:
00332| ```bash
00333| cd ~/.config/opencode/plugin
00334| ./install.sh
00335| ```
00336|
00337| ### Command Not Appearing
00338|
00339| 1. Verify `tokenscope.md` exists:
00340| ```bash
00341| ls ~/.config/opencode/command/tokenscope.md
00342| ```
00343| 2. Restart OpenCode completely
00344| 3. Check OpenCode logs for plugin errors
00345|
00346| ### Wrong Token Counts
00347|
00348| The plugin uses API telemetry (ground truth). If counts seem off:
00349| - **Expected ~2K difference from TUI**: Plugin analyzes before its own response is added
00350| - **Model detection**: Check that the model name is recognized in the output
00351| - **Tokenizer not installed**: Re-run `install.sh`
00352|
00353| ### New Model Not Showing Correct Pricing
00354|
00355| 1. Check if model exists in `models.json`
00356| 2. Try exact match or prefix match (e.g., `claude-sonnet-4` matches `claude-sonnet-4-20250514`)
00357| 3. Add entry to `models.json` if missing
00358| 4. Restart OpenCode after editing `models.json`
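As a sketch of how exact-then-prefix matching could work (illustrative only, not the plugin's actual lookup code):

```typescript
// Illustrative pricing lookup: exact key first, then the longest prefix match.
interface ModelPricing {
  input: number
  output: number
}

function findPricing(
  table: Record<string, ModelPricing>,
  model: string,
): ModelPricing | undefined {
  if (table[model]) return table[model] // exact match wins
  // Otherwise use the longest table key that prefixes the model name,
  // so "claude-sonnet-4" matches "claude-sonnet-4-20250514".
  const prefix = Object.keys(table)
    .filter((key) => model.startsWith(key))
    .sort((a, b) => b.length - a.length)[0]
  return prefix ? table[prefix] : undefined
}
```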
00359|
00360| ### Plugin Fails to Load
00361|
00362| 1. Validate JSON syntax:
00363| ```bash
00364| cd ~/.config/opencode/plugin
00365| node -e "JSON.parse(require('fs').readFileSync('models.json', 'utf8'))"
00366| ```
00367| 2. Check for trailing commas or syntax errors
00368| 3. Plugin falls back to default pricing if file is invalid
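That fallback behavior boils down to a guarded parse; a hedged sketch (DEFAULT_PRICING and the return shape are assumptions made up for this example):

```typescript
// Sketch of "fall back to default pricing if models.json is invalid".
// DEFAULT_PRICING values are illustrative, not the plugin's real defaults.
const DEFAULT_PRICING = { input: 3.0, output: 15.0 }

function parsePricing(raw: string): Record<string, { input: number; output: number }> {
  try {
    return JSON.parse(raw) // valid models.json wins
  } catch {
    return { default: DEFAULT_PRICING } // invalid JSON: keep the plugin loadable
  }
}
```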
00369|
00370| ## Understanding the Numbers
00371|
00372| ### Current Context vs Session Total
00373|
00374| - **Current Context**: What's in your context window right now
00375| - Based on most recent API call
00376| - Used to understand current memory usage
00377|
00378| - **Session Total**: All tokens processed in this session
00379| - Sum of all API calls in the main session
00380| - What you're billed for (main session only)
00381| - Used for cost calculation
00382|
00383| ### Subagent Totals
00384|
00385| When using the Task tool, OpenCode spawns subagent sessions. These are tracked separately:
00386|
00387| - **Subagent Tokens**: Combined tokens from all child sessions
00388| - **Subagent API Calls**: Total API calls made by all subagents
00389| - **Grand Total**: Main session + all subagents combined
00390|
00391| The summary section shows a breakdown:
00392| ```
00393| Cost Tokens API Calls
00394| Main session: $ 0.5677 375,686 15
00395| Subagents: $ 1.3888 566,474 23
00396| ─────────────────────────────────────────────────────────────────────────
00397| TOTAL: $ 1.9565 942,160 38
00398| ```
00399|
00400| ### Cache Tokens
00401|
00402| - **Cache Read**: Tokens retrieved from cache (discounted rate ~90% off)
00403| - **Cache Write**: Tokens written to cache (slight premium ~25% more)
00404| - **Note**: Cache write is a billing charge, not additional context tokens
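The cache-aware cost math is just per-million-token rates applied to the four counters; a sketch (field names mirror `models.json`, but the function itself is illustrative):

```typescript
// Rates are in dollars per million tokens, as in models.json.
interface Pricing {
  input: number
  output: number
  cacheRead: number
  cacheWrite: number
}

interface Usage {
  input: number
  output: number
  cacheRead: number
  cacheWrite: number
}

function estimateCost(rates: Pricing, usage: Usage): number {
  const perM = (tokens: number, rate: number) => (tokens / 1_000_000) * rate
  return (
    perM(usage.input, rates.input) +
    perM(usage.output, rates.output) +
    perM(usage.cacheRead, rates.cacheRead) +
    perM(usage.cacheWrite, rates.cacheWrite)
  )
}
```

With the session totals from the example report (rates $5/$25/$0.50/$6.25 per million), this reproduces the $0.5677 estimate within rounding.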
00405|
00406| ## Architecture
00407|
00408| ### Core Components
00409|
00410| 1. **TokenizerManager**: Loads and caches tokenizers (tiktoken, transformers)
00411| 2. **ModelResolver**: Detects model and selects appropriate tokenizer
00412| 3. **ContentCollector**: Extracts content from session messages, including tool call counts
00413| 4. **TokenAnalysisEngine**: Counts tokens and applies API telemetry adjustments
00414| 5. **CostCalculator**: Calculates costs from pricing database with cache-aware pricing
00415| 6. **SubagentAnalyzer**: Recursively fetches and analyzes child sessions from Task tool calls
00416| 7. **OutputFormatter**: Generates visual reports with charts and summaries
00417|
00418| ## Privacy & Security
00419|
00420| - **All processing is local**: No session data sent to external services
00421| - **Tokenizers from official sources**:
00422| - OpenAI tokenizers: npm registry
00423| - Transformers: Hugging Face Hub
00424| - **Open source**: Audit the code yourself
00425|
00426| ## Performance
00427|
00428| - **Fast**: Tokenizers cached after first load
00429| - **Parallel**: Categories processed concurrently
00430| - **Efficient**: Only analyzes on demand
00431| - **First-run download**: Transformers models download on demand (5-50MB per model)
00432| - **Subsequent runs**: Instant (uses cache)
00433|
00434| ## Contributing
00435|
00436| Contributions welcome! Ideas for enhancement:
00437|
00438| - Historical trend analysis
00439| - Export to CSV/JSON/PDF
00440| - Optimization suggestions
00441| - Custom categorization rules
00442| - Real-time monitoring with alerts
00443| - Compare sessions
00444| - Token burn rate calculation
00445|
00446| ## Support
00447|
00448| - **Issues**: [GitHub Issues](https://github.com/ramtinJ95/opencode-tokenscope/issues)
00449| - **Discussions**: [GitHub Discussions](https://github.com/ramtinJ95/opencode-tokenscope/discussions)
00450|
(End of file - total 450 lines)
</file>
Now let me fetch the OpenCode plugin documentation to understand how plugins work and what constraints exist:
I need to understand how OpenCode plugins work, specifically:
1. How are plugins loaded and discovered?
2. Can a plugin consist of multiple TypeScript files or does it need to be a single file?
3. What is the expected file structure for plugins?
4. Are there any limitations on imports between files in a plugin?
Please fetch the OpenCode documentation from https://opencode.ai/docs and look for plugin-related documentation. Focus on plugin architecture, file structure, and any constraints on how plugins can be organized.
Return a detailed summary of what you find about plugin structure and whether multi-file plugins are supported.
This confirms multi-file plugins work. Now I have all the information I need. Let me compile a comprehensive answer.
---
## OpenCode Plugin Documentation Summary
Based on my research of the [official OpenCode documentation](https://opencode.ai/docs/plugins), the OpenCode GitHub source code, and real-world community plugin examples, here's a detailed summary of how OpenCode plugins work:
---
### 1. Plugin Loading and Discovery
**Loading Mechanism:**
Plugins are loaded from two locations:
1. **Project-level**: `.opencode/plugin/` directory in your project
2. **Global-level**: `~/.config/opencode/plugin/`
**Discovery Process** (from the [source code](https://github.com/sst/opencode/blob/dev/packages/opencode/src/plugin/index.ts)):
```typescript
// OpenCode scans plugin directories and loads all JavaScript/TypeScript modules
for (let plugin of plugins) {
const mod = await import(plugin)
for (const [_name, fn] of Object.entries<PluginInstance>(mod)) {
const init = await fn(input)
hooks.push(init)
}
}
```
**Key Points:**
- OpenCode uses Bun's runtime to execute plugins directly
- Plugins can be installed from npm packages (e.g., `"opencode-copilot-auth@0.0.9"`)
- Or loaded from local files using `file://` prefix
- **Each exported function** that matches the `Plugin` type signature gets initialized
---
### 2. Multi-File Plugin Support: ✅ YES, Supported!
**The documentation does NOT explicitly address this**, but based on real-world evidence, **multi-file plugins are fully supported**:
**Evidence 1: Opencode-Context-Analysis-Plugin**
```typescript
// .opencode/plugin/context-usage.ts
import type { TokenModel } from "./tokenizer-registry.mjs"
import { resolveTokenModel, TokenizerResolutionError } from "./tokenizer-registry.mjs"
```
**Evidence 2: OpenAgents telegram-notify.ts**
```typescript
// .opencode/plugin/telegram-notify.ts
import { SimpleTelegramBot } from "./lib/telegram-bot"
```
**How It Works:**
- Plugins run in Bun's runtime which supports standard ESM imports
- You can import from:
- Relative files (`./lib/utils.ts`, `./tokenizer-registry.mjs`)
- External npm packages (`@opencode-ai/plugin`, `zod`)
- Node.js built-ins (`path`, `fs/promises`, `url`)
---
### 3. Expected File Structure
**Basic Single-File Plugin:**
```
.opencode/
└── plugin/
└── my-plugin.ts # Single file with exported Plugin function(s)
```
**Multi-File Plugin Structure (supported):**
```
.opencode/
└── plugin/
├── my-plugin.ts # Main entry point (exports Plugin functions)
├── lib/
│ ├── utils.ts # Helper utilities
│ └── telegram-bot.ts
├── tokenizer-registry.mjs
├── vendor/ # External dependencies
│ └── node_modules/
├── models.json # Data files
└── package.json # Optional: for managing dependencies
```
**Plugin Entry Point Structure:**
```typescript
// my-plugin.ts
import type { Plugin } from "@opencode-ai/plugin"
import { tool } from "@opencode-ai/plugin"
import { MyHelper } from "./lib/utils" // ✅ Multi-file imports work
export const MyPlugin: Plugin = async ({ project, client, $, directory, worktree }) => {
return {
// Hook implementations
event: async ({ event }) => { /* ... */ },
"tool.execute.before": async (input, output) => { /* ... */ },
tool: {
mytool: tool({ /* ... */ })
}
}
}
```
---
### 4. Import Limitations and Constraints
**What DOES work:**
| Import Type | Supported | Example |
|-------------|-----------|---------|
| Relative TypeScript/JS | ✅ | `import { util } from "./lib/utils"` |
| Relative `.mjs` files | ✅ | `import { fn } from "./tokenizer-registry.mjs"` |
| External npm packages | ✅ | `import { z } from "zod"` |
| Node.js built-ins | ✅ | `import path from "path"` |
| JSON imports | ✅ | `import data from "./models.json"` |
| Dynamic imports | ✅ | `await import(pathToFileURL(entryPath).href)` |
**Constraints to be aware of:**
1. **Bun Runtime Requirement**: Plugins run under Bun, so you must use Bun-compatible code:
- ESM syntax (`import`/`export`)
- Bun's shell API (`$`) is available
2. **Package Management**: If your plugin has external dependencies:
- OpenCode auto-installs `@opencode-ai/plugin` in the plugin directory
- For other deps, you need a `package.json` and run `bun install`
- Or bundle dependencies in a `vendor/` directory (like the Context Analysis plugin does)
3. **Entry Point Exports**: Only **exported** functions matching the `Plugin` type signature are loaded:
```typescript
// ✅ This gets loaded - exported and async function returning hooks
export const MyPlugin: Plugin = async (ctx) => { return { /* hooks */ } }
// ❌ Not loaded - helper function, not a Plugin
export function helperFunction() { }
// ❌ Not loaded - not exported
const InternalPlugin: Plugin = async (ctx) => { return {} }
```
4. **File Extensions**: Both `.ts` and `.js` work. For explicit ESM, you can use `.mjs`
---
### Summary Comparison
| Question | Answer |
|----------|--------|
| Can plugins have multiple files? | **Yes** - Use standard ESM imports |
| Is there a required file structure? | **No** - Any structure works, just export `Plugin` functions |
| Where are plugins loaded from? | `.opencode/plugin/` (project) or `~/.config/opencode/plugin/` (global) |
| Can I import npm packages? | **Yes** - Need `package.json` and `bun install` |
| What runtime environment? | **Bun** - Full Bun API available including `$` shell |
---
### Recommendations for Your Plugin
Looking at your current `opencode-tokenscope` plugin, it's a **single-file** plugin (~1340 lines). You have a few options:
1. **Keep it single-file** (current) - Works fine, but harder to maintain
2. **Split into multiple files** - Fully supported! You could organize like:
```
plugin/
├── tokenscope.ts # Main entry + tool export
├── lib/
│ ├── tokenizer.ts # TokenizerManager class
│ ├── analyzer.ts # TokenAnalysisEngine class
│ ├── formatter.ts # OutputFormatter class
│ └── types.ts # All interfaces
├── models.json
└── package.json
```
Would you like me to help refactor the plugin into a multi-file structure?
<task_metadata>
session_id: ses_4e853b879ffeUM8Cpo7YZTK2xH
</task_metadata>
Based on my analysis of your plugin code and the OpenCode documentation, here's my assessment:
## Summary: Yes, You Can Split tokenscope.ts Into Multiple Files
The good news: OpenCode plugins fully support multi-file structures. Plugins run in Bun's runtime which supports standard ESM imports, so you can organize your code however you like.
### Current Structure Analysis
Your tokenscope.ts file (~1340 lines) contains:
- Types/Interfaces (lines 12-175): ~160 lines
- Model Configuration (lines 177-249): ~70 lines
- TokenizerManager class (lines 253-375): ~120 lines
- ModelResolver class (lines 379-448): ~70 lines
- ContentCollector class (lines 452-587): ~135 lines
- TokenAnalysisEngine class (lines 591-746): ~155 lines
- CostCalculator class (lines 750-802): ~50 lines
- SubagentAnalyzer class (lines 806-933): ~125 lines
- OutputFormatter class (lines 938-1259): ~320 lines
- Plugin Export (lines 1263-1339): ~75 lines
### Proposed Multi-File Structure
```
plugin/
├── tokenscope.ts          # Main entry point (~100 lines)
├── lib/
│   ├── types.ts           # All interfaces/types
│   ├── config.ts          # Model maps, pricing, constants
│   ├── tokenizer.ts       # TokenizerManager class
│   ├── resolver.ts        # ModelResolver class
│   ├── collector.ts       # ContentCollector class
│   ├── analyzer.ts        # TokenAnalysisEngine class
│   ├── cost.ts            # CostCalculator class
│   ├── subagent.ts        # SubagentAnalyzer class
│   └── formatter.ts       # OutputFormatter class
├── models.json            # (unchanged)
├── package.json           # (unchanged)
└── vendor/                # (unchanged)
```

### What You'd Need to Change in install.sh
The install.sh would need minor modifications:
1. Download additional files - Add the new lib/*.ts files to the FILES array
2. Create lib directory - Add mkdir -p "$OPENCODE_DIR/plugin/lib"
3. Update file paths - Adjust download/move logic for nested files
### Will Anything Break?
| Concern | Will it break? | Notes |
|---------|----------------|-------|
| Plugin loading | No | OpenCode only cares about the exported Plugin function |
| Import resolution | No | Bun handles ESM imports from relative paths |
| Vendor dependencies | No | VENDOR_ROOT uses import.meta.url; once it moves into lib/config.ts, the path needs an extra ".." segment so it still resolves to plugin/vendor |
| models.json loading | No | Same import.meta.url caveat: resolve models.json relative to the plugin root, not lib/ |
| /tokenscope command | No | command/tokenscope.md stays the same |
### Questions Before Proceeding
Before I propose a detailed implementation, I have a few questions:
1. How granular do you want the split?
- Option A: Just a few files (types, core logic, formatter)
- Option B: One file per class (more modular, ~9 files)
- Option C: Something in between
2. Do you want to maintain backward compatibility?
- Should the single-file version still work, with multi-file as an alternative?
- Or fully replace with multi-file structure?
3. Are there specific parts you want isolated? For example:
- Types only (for easier maintenance/sharing)
- Output formatting (if you want to customize display)
- Pricing/models (already somewhat separate with models.json)
Let me know your preferences and I can provide a concrete refactoring plan with the exact file splits and install.sh changes needed.
1. I think option A is best here. 2. No, this should be a full replacement. 3. I think splitting it up into logically separate files where it makes sense, so it's easier to maintain and build upon, is best. Make me an implementation plan, and don't forget the install.sh changes that are needed as well.
I'll create a comprehensive implementation plan for splitting up the tokenscope.ts file into logically separate modules.
---
# Implementation Plan: Multi-File Plugin Structure
## 1. Proposed File Structure
```
plugin/
├── tokenscope.ts          # Main entry point - Plugin export only (~80 lines)
├── lib/
│   ├── types.ts           # All interfaces and type definitions (~165 lines)
│   ├── config.ts          # Constants, model maps, pricing loader (~95 lines)
│   ├── tokenizer.ts       # TokenizerManager class (~125 lines)
│   ├── analyzer.ts        # ModelResolver, ContentCollector, TokenAnalysisEngine (~320 lines)
│   └── formatter.ts       # CostCalculator, SubagentAnalyzer, OutputFormatter (~510 lines)
├── models.json            # (unchanged)
├── package.json           # (unchanged)
├── install.sh             # (modified to download new files)
└── vendor/                # (unchanged - created by npm install)
    └── node_modules/
```

Total: 6 TypeScript files (1 main + 5 in lib/)
---
## 2. Module Responsibilities & Contents
### plugin/tokenscope.ts (~80 lines)
Purpose: Main entry point - only exports the Plugin
```typescript
// Imports
import type { Plugin } from "@opencode-ai/plugin"
import { tool } from "@opencode-ai/plugin"
import path from "path"
import fs from "fs/promises"

// Import from lib modules
import type { SessionMessage } from "./lib/types"
import { DEFAULT_ENTRY_LIMIT, loadModelPricing } from "./lib/config"
import { TokenizerManager } from "./lib/tokenizer"
import { ModelResolver, ContentCollector, TokenAnalysisEngine } from "./lib/analyzer"
import { CostCalculator, SubagentAnalyzer, OutputFormatter } from "./lib/formatter"

// Plugin export (the tool definition and execute function)
export const TokenAnalyzerPlugin: Plugin = async ({ client }) => { ... }
```
---
### plugin/lib/types.ts (~165 lines)
Purpose: All TypeScript interfaces and type definitions
Contains:
- SessionMessage, SessionMessageInfo, SessionMessagePart
- TokenUsage, ToolState
- CategoryEntry, CategorySummary, CategoryEntrySource
- TokenAnalysis, TokenModel, TokenizerSpec
- CostEstimate, ModelPricing
- SubagentSummary, SubagentAnalysis, ChildSession
- Type guard functions: isToolPart(), isReasoningPart(), isTextPart()
Exports:
```typescript
// All interfaces
export interface SessionMessage { ... }
export interface TokenAnalysis { ... }
// etc.

// Type guards
export function isToolPart(part: SessionMessagePart): part is { ... }
export function isReasoningPart(part: SessionMessagePart): part is { ... }
export function isTextPart(part: SessionMessagePart): part is { ... }
```
---
### plugin/lib/config.ts (~95 lines)
Purpose: Constants, model mappings, and pricing configuration
Contains:
- DEFAULT_ENTRY_LIMIT constant
- VENDOR_ROOT path calculation
- OPENAI_MODEL_MAP mapping
- TRANSFORMERS_MODEL_MAP mapping
- PROVIDER_DEFAULTS mapping
- loadModelPricing() function
- Pricing cache management
Exports:
```typescript
export const DEFAULT_ENTRY_LIMIT = 3
export const VENDOR_ROOT = path.join(...)
export const OPENAI_MODEL_MAP: Record<string, string> = { ... }
export const TRANSFORMERS_MODEL_MAP: Record<string, string> = { ... }
export const PROVIDER_DEFAULTS: Record<string, TokenizerSpec> = { ... }
export async function loadModelPricing(): Promise<Record<string, ModelPricing>> { ... }
```

---
### plugin/lib/tokenizer.ts (~125 lines)
Purpose: Token counting with multiple tokenizer backends
Contains:
- TokenizerManager class
- countTokens() - main entry point
- approximateTokenCount() - fallback
- countWithTiktoken() - OpenAI tokenizer
- countWithTransformers() - HuggingFace tokenizer
- loadTiktokenEncoder() - lazy loading
- loadTransformersTokenizer() - lazy loading
- importFromVendor() - vendor dependency loader
Exports:
```typescript
export class TokenizerManager {
  async countTokens(content: string, model: TokenModel): Promise<number>
  // ... private methods
}
```

---
### plugin/lib/analyzer.ts (~320 lines)
Purpose: Core analysis logic - model resolution, content collection, token analysis
Contains:
- ModelResolver class
- resolveTokenModel() - determines tokenizer from messages
- resolveOpenAIModel() - OpenAI-specific resolution
- resolveTransformersModel() - Transformers-specific resolution
- Helper methods: mapOpenAI(), canonicalize()
- ContentCollector class
- collectSystemPrompts() - extracts system prompts
- collectMessageTexts() - extracts user/assistant messages
- collectToolOutputs() - extracts tool results
- collectToolCallCounts() - counts tool invocations
- collectAllToolsCalled() - lists unique tools
- collectReasoningTexts() - extracts reasoning traces
- Helper methods: extractText(), identifySystemPrompt(), capitalize()
- TokenAnalysisEngine class
- analyze() - main analysis orchestration
- buildCategory() - builds category summaries
- applyTelemetryAdjustments() - applies API telemetry data
Exports:
```typescript
export class ModelResolver {
  resolveTokenModel(messages: SessionMessage[]): TokenModel
}

export class ContentCollector {
  collectSystemPrompts(messages: SessionMessage[]): CategoryEntrySource[]
  collectMessageTexts(messages: SessionMessage[], role: "user" | "assistant"): CategoryEntrySource[]
  collectToolOutputs(messages: SessionMessage[]): CategoryEntrySource[]
  collectToolCallCounts(messages: SessionMessage[]): Map<string, number>
  collectAllToolsCalled(messages: SessionMessage[]): string[]
  collectReasoningTexts(messages: SessionMessage[]): CategoryEntrySource[]
}

export class TokenAnalysisEngine {
  constructor(tokenizerManager: TokenizerManager, contentCollector: ContentCollector)
  async analyze(sessionID: string, messages: SessionMessage[], tokenModel: TokenModel, entryLimit: number): Promise<TokenAnalysis>
}
```
---
### plugin/lib/formatter.ts (~510 lines)
Purpose: Cost calculation, subagent analysis, and output formatting
Contains:
- CostCalculator class
- calculateCost() - computes cost estimate
- getPricing() - looks up model pricing
- normalizeModelName() - handles provider/model format
- SubagentAnalyzer class
- analyzeChildSessions() - recursive child session analysis
- analyzeChildSession() - single child analysis
- extractAgentType() - parses agent type from title
- getPricing() - cost calculation for subagents
- OutputFormatter class
- format() - main formatting entry point
- formatVisualOutput() - generates the full report
- formatCategoryBar() - creates ASCII bar charts
- collectTopEntries() - finds top token consumers
- formatNumber() - number formatting utility
Exports:
export class CostCalculator {
constructor(pricingData: Record<string, ModelPricing>)
calculateCost(analysis: TokenAnalysis): CostEstimate
}
export class SubagentAnalyzer {
constructor(client: any, costCalculator: CostCalculator, pricingData: Record<string, ModelPricing>)
async analyzeChildSessions(parentSessionID: string): Promise<SubagentAnalysis>
}
export class OutputFormatter {
constructor(costCalculator: CostCalculator)
format(analysis: TokenAnalysis): string
}
---
## 3. Dependency Graph
tokenscope.ts (main entry)
│
├── lib/types.ts (no dependencies)
│
├── lib/config.ts
│ └── imports: lib/types.ts
│
├── lib/tokenizer.ts
│ └── imports: lib/types.ts, lib/config.ts
│
├── lib/analyzer.ts
│ └── imports: lib/types.ts, lib/config.ts, lib/tokenizer.ts
│
└── lib/formatter.ts
└── imports: lib/types.ts

Import Order (no circular dependencies):
1. types.ts - standalone, no imports from lib
2. config.ts - imports only types.ts
3. tokenizer.ts - imports types.ts, config.ts
4. analyzer.ts - imports types.ts, config.ts, tokenizer.ts
5. formatter.ts - imports types.ts
6. tokenscope.ts - imports all of the above
---
## 4. Install.sh Modifications
### Changes Required to install.sh:
#### Change 1: Add lib directory creation (line ~88)
# Before:
mkdir -p "$OPENCODE_DIR/plugin"
mkdir -p "$OPENCODE_DIR/command"
# After:
mkdir -p "$OPENCODE_DIR/plugin"
mkdir -p "$OPENCODE_DIR/plugin/lib" # NEW
mkdir -p "$OPENCODE_DIR/command"
#### Change 2: Update FILES array (lines ~95-101)
# Before:
FILES=(
"plugin/tokenscope.ts"
"plugin/models.json"
"plugin/package.json"
"plugin/install.sh"
"command/tokenscope.md"
)
# After:
FILES=(
"plugin/tokenscope.ts"
"plugin/lib/types.ts"
"plugin/lib/config.ts"
"plugin/lib/tokenizer.ts"
"plugin/lib/analyzer.ts"
"plugin/lib/formatter.ts"
"plugin/models.json"
"plugin/package.json"
"plugin/install.sh"
"command/tokenscope.md"
)
#### Change 3: Update file download logic (lines ~103-121)
# Before:
for file in "${FILES[@]}"; do
filename=$(basename "$file")
dir=$(dirname "$file")
echo_info "Downloading $filename..."
if curl -fsSL "$REPO_URL/raw/main/$file" -o "$TEMP_DIR/$filename" 2>/dev/null; then
# Move to appropriate directory
if [ "$dir" = "plugin" ]; then
mv "$TEMP_DIR/$filename" "$OPENCODE_DIR/plugin/$filename"
else
mv "$TEMP_DIR/$filename" "$OPENCODE_DIR/command/$filename"
fi
# ...
# After:
for file in "${FILES[@]}"; do
filename=$(basename "$file")
dir=$(dirname "$file")
echo_info "Downloading $filename..."
if curl -fsSL "$REPO_URL/raw/main/$file" -o "$TEMP_DIR/$filename" 2>/dev/null; then
# Move to appropriate directory based on path
if [ "$dir" = "plugin" ]; then
mv "$TEMP_DIR/$filename" "$OPENCODE_DIR/plugin/$filename"
elif [ "$dir" = "plugin/lib" ]; then
mv "$TEMP_DIR/$filename" "$OPENCODE_DIR/plugin/lib/$filename"
else
mv "$TEMP_DIR/$filename" "$OPENCODE_DIR/command/$filename"
fi
# ...
#### Change 4: Update verification (lines ~150-155)
# Before:
REQUIRED_FILES=(
"$OPENCODE_DIR/plugin/tokenscope.ts"
"$OPENCODE_DIR/plugin/models.json"
"$OPENCODE_DIR/plugin/node_modules/js-tiktoken"
"$OPENCODE_DIR/plugin/node_modules/@huggingface/transformers"
"$OPENCODE_DIR/command/tokenscope.md"
)
# After:
REQUIRED_FILES=(
"$OPENCODE_DIR/plugin/tokenscope.ts"
"$OPENCODE_DIR/plugin/lib/types.ts"
"$OPENCODE_DIR/plugin/lib/config.ts"
"$OPENCODE_DIR/plugin/lib/tokenizer.ts"
"$OPENCODE_DIR/plugin/lib/analyzer.ts"
"$OPENCODE_DIR/plugin/lib/formatter.ts"
"$OPENCODE_DIR/plugin/models.json"
"$OPENCODE_DIR/plugin/node_modules/js-tiktoken"
"$OPENCODE_DIR/plugin/node_modules/@huggingface/transformers"
"$OPENCODE_DIR/command/tokenscope.md"
)
---
## 5. Migration Steps
### Step-by-Step Implementation Order:
1. Create plugin/lib/ directory
2. Create plugin/lib/types.ts
- Extract all interfaces (lines 12-175)
- Extract type guard functions (lines 43-53)
- No external dependencies
3. Create plugin/lib/config.ts
- Extract constants and maps (lines 7-8, 195-249)
- Extract loadModelPricing() (lines 179-193)
- Import TokenizerSpec, ModelPricing from types.ts
4. Create plugin/lib/tokenizer.ts
- Extract TokenizerManager class (lines 253-375)
- Import TokenModel from types.ts
- Import VENDOR_ROOT from config.ts
5. Create plugin/lib/analyzer.ts
- Extract ModelResolver class (lines 379-448)
- Extract ContentCollector class (lines 452-587)
- Extract TokenAnalysisEngine class (lines 591-746)
- Import types from types.ts
- Import maps from config.ts
- Import TokenizerManager from tokenizer.ts
6. Create plugin/lib/formatter.ts
- Extract CostCalculator class (lines 750-802)
- Extract SubagentAnalyzer class (lines 806-933)
- Extract OutputFormatter class (lines 938-1259)
- Import types from types.ts
7. Refactor plugin/tokenscope.ts
- Keep only the Plugin export (lines 1263-1339)
- Add imports from all lib modules
8. Update plugin/install.sh
- Apply changes from Section 4
9. Update version in plugin/package.json
- Bump to 1.3.0 (breaking change in file structure)
---
## 6. Testing Approach
### Local Testing Before Push:
# 1. Create the new file structure locally
mkdir -p ~/.config/opencode/plugin/lib
# 2. Copy all new files to test location
cp plugin/tokenscope.ts ~/.config/opencode/plugin/
cp plugin/lib/*.ts ~/.config/opencode/plugin/lib/
# 3. Restart OpenCode
# 4. Run /tokenscope and verify output matches previous behavior
### Verification Checklist:
- [ ] Plugin loads without errors
- [ ] /tokenscope command appears in command list
- [ ] Token analysis produces same output format
- [ ] All 5 categories display correctly
- [ ] Tool usage breakdown works
- [ ] Subagent analysis works (if applicable)
- [ ] Cost calculation is accurate
- [ ] File output (token-usage-output.txt) is generated
- [ ] No TypeScript compilation errors
- [ ] install.sh downloads all files correctly
- [ ] install.sh --update works correctly
### Regression Test:
Compare output before and after refactor:
# Before refactor
/tokenscope
cat token-usage-output.txt > before.txt
# After refactor (restart OpenCode)
/tokenscope
cat token-usage-output.txt > after.txt
# Compare (should be identical except timestamps)
diff before.txt after.txt
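If the timestamp lines make a plain `diff` noisy, a small normalization helper can strip them before comparing. This is a hypothetical sketch: the ISO-8601 pattern is an assumption, so adjust the regex to whatever timestamp format token-usage-output.txt actually emits.

```typescript
// Replace anything that looks like an ISO-8601 timestamp with a fixed marker,
// so only real output differences show up in the comparison.
// The pattern here is an assumption about the output format.
function normalizeTimestamps(text: string): string {
  return text.replace(/\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d+)?Z?/g, "<timestamp>")
}

normalizeTimestamps("Generated at 2025-01-02T10:30:00Z")
// → "Generated at <timestamp>"
```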
---
## 7. Summary Table
| File | Lines | Purpose |
|------|-------|---------|
| tokenscope.ts | ~80 | Plugin entry point, tool registration |
| lib/types.ts | ~165 | All interfaces and type guards |
| lib/config.ts | ~95 | Constants, model maps, pricing loader |
| lib/tokenizer.ts | ~125 | TokenizerManager - token counting |
| lib/analyzer.ts | ~320 | ModelResolver, ContentCollector, TokenAnalysisEngine |
| lib/formatter.ts | ~510 | CostCalculator, SubagentAnalyzer, OutputFormatter |
| Total | ~1295 | (vs. the original 1340 lines; slightly fewer due to consolidated imports) |
---
## Questions Before Implementation
Do you want me to proceed with implementing this plan? If so, I have a few questions:
1. Version bump: Should I bump to 1.3.0 (indicates structural change) or would you prefer a different version?
2. README updates: Should I update the README to reflect the new file structure in the Architecture section?
3. Start implementation now?: Would you like me to create all the files now, or would you prefer to review/modify this plan first?
Testing locally will be very hard, so forget about that part; I will test it myself. I think the plan is great, but the cost analysis and subagent analysis parts should not be in formatter.ts; that is a bit weird. 1: yes, bump the version. 2: yes, for sure. 3: not yet; update the step-by-step plan with the new instructions and also create a todo list I can review one last time before we start implementing.
You're right, having CostCalculator and SubagentAnalyzer in formatter.ts doesn't make logical sense - they're not formatting concerns. Let me revise the plan.
---
# Revised Implementation Plan (v2)
## 1. Updated File Structure
plugin/
├── tokenscope.ts # Main entry point - Plugin export only (~80 lines)
├── lib/
│ ├── types.ts # All interfaces and type definitions (~165 lines)
│ ├── config.ts # Constants, model maps, pricing loader (~95 lines)
│ ├── tokenizer.ts # TokenizerManager class (~125 lines)
│ ├── analyzer.ts # ModelResolver, ContentCollector, TokenAnalysisEngine (~320 lines)
│ ├── cost.ts # CostCalculator class (~55 lines) [NEW]
│ ├── subagent.ts # SubagentAnalyzer class (~130 lines) [NEW]
│ └── formatter.ts # OutputFormatter class only (~325 lines) [REDUCED]
├── models.json # (unchanged)
├── package.json # (version bumped to 1.3.0)
├── install.sh # (modified to download new files)
└── vendor/ # (unchanged - created by npm install)
└── node_modules/

Total: 8 TypeScript files (1 main + 7 in lib/)
---
## 2. Revised Module Responsibilities
### plugin/lib/cost.ts (~55 lines) [NEW FILE]
Purpose: Cost calculation logic
Contains:
- CostCalculator class
- calculateCost() - computes cost estimate from analysis
- getPricing() - looks up model pricing from data
- normalizeModelName() - handles provider/model format
Exports:
export class CostCalculator {
constructor(pricingData: Record<string, ModelPricing>)
calculateCost(analysis: TokenAnalysis): CostEstimate
}

---
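The per-million-token arithmetic that calculateCost() performs can be sketched in isolation. This is a minimal, self-contained illustration of the pricing math, not the real class; the `estimateCost` name and the inline pricing numbers are assumptions for the example.

```typescript
// Hypothetical sketch of the cost arithmetic: pricing rates are dollars
// per million tokens, so each bucket is (count / 1M) * rate.
interface Pricing { input: number; output: number; cacheRead: number; cacheWrite: number }

function estimateCost(
  tokens: { input: number; output: number; cacheRead: number; cacheWrite: number },
  pricing: Pricing,
): number {
  const perMillion = (count: number, rate: number) => (count / 1_000_000) * rate
  return (
    perMillion(tokens.input, pricing.input) +
    perMillion(tokens.output, pricing.output) +
    perMillion(tokens.cacheRead, pricing.cacheRead) +
    perMillion(tokens.cacheWrite, pricing.cacheWrite)
  )
}

// e.g. 1M input at $3/M plus 200k output at $15/M → $3 + $3 = $6
const cost = estimateCost(
  { input: 1_000_000, output: 200_000, cacheRead: 0, cacheWrite: 0 },
  { input: 3, output: 15, cacheRead: 0.3, cacheWrite: 3.75 },
)
```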
### plugin/lib/subagent.ts (~130 lines) [NEW FILE]
Purpose: Child session analysis for Task tool subagents
Contains:
- SubagentAnalyzer class
- analyzeChildSessions() - recursive child session analysis
- analyzeChildSession() - single child analysis
- extractAgentType() - parses agent type from title
- getPricing() - cost calculation for subagents
Exports:
export class SubagentAnalyzer {
constructor(client: any, costCalculator: CostCalculator, pricingData: Record<string, ModelPricing>)
async analyzeChildSessions(parentSessionID: string): Promise<SubagentAnalysis>
}

---
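The recursive traversal that analyzeChildSessions() performs can be sketched with an in-memory session list standing in for the opencode client. The `Session` shape and `sumDescendantTokens` helper are assumptions for illustration; the real class fetches sessions through the client and aggregates full per-category totals, not just a single number.

```typescript
// Hypothetical sketch of the recursive child-session aggregation pattern.
interface Session { id: string; parentID?: string; totalTokens: number }

function sumDescendantTokens(sessions: Session[], parentID: string): number {
  let total = 0
  for (const s of sessions) {
    if (s.parentID !== parentID) continue
    // count this child, then recurse into its own children (grandchildren, etc.)
    total += s.totalTokens + sumDescendantTokens(sessions, s.id)
  }
  return total
}

const sessions: Session[] = [
  { id: "root", totalTokens: 0 },
  { id: "a", parentID: "root", totalTokens: 100 },
  { id: "b", parentID: "a", totalTokens: 50 },
]
// sumDescendantTokens(sessions, "root") → 150
```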
### plugin/lib/formatter.ts (~325 lines) [REDUCED]
Purpose: Output formatting only - visual report generation
Contains:
- OutputFormatter class
- format() - main formatting entry point
- formatVisualOutput() - generates the full report
- formatCategoryBar() - creates ASCII bar charts
- collectTopEntries() - finds top token consumers
- formatNumber() - number formatting utility
Exports:
export class OutputFormatter {
constructor(costCalculator: CostCalculator)
format(analysis: TokenAnalysis): string
}

---
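The kind of ASCII bar formatCategoryBar() produces can be sketched as a proportion-to-glyphs mapping. The `categoryBar` name, glyph choice, and width are assumptions here; the real formatting lives in lib/formatter.ts.

```typescript
// Hypothetical sketch: scale a category's token count against the largest
// category and render it as a fixed-width bar of filled/empty cells.
function categoryBar(tokens: number, maxTokens: number, width = 20): string {
  const filled = maxTokens > 0 ? Math.round((tokens / maxTokens) * width) : 0
  return "█".repeat(filled) + "░".repeat(width - filled)
}

categoryBar(500, 1000, 10) // → "█████░░░░░"
```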
## 3. Updated Dependency Graph
tokenscope.ts (main entry)
│
├── lib/types.ts (no dependencies)
│
├── lib/config.ts
│ └── imports: lib/types.ts
│
├── lib/tokenizer.ts
│ └── imports: lib/types.ts, lib/config.ts
│
├── lib/analyzer.ts
│ └── imports: lib/types.ts, lib/config.ts, lib/tokenizer.ts
│
├── lib/cost.ts
│ └── imports: lib/types.ts
│
├── lib/subagent.ts
│ └── imports: lib/types.ts, lib/cost.ts
│
└── lib/formatter.ts
└── imports: lib/types.ts, lib/cost.ts

Import Order (no circular dependencies):
1. types.ts - standalone, no imports from lib
2. config.ts - imports only types.ts
3. tokenizer.ts - imports types.ts, config.ts
4. analyzer.ts - imports types.ts, config.ts, tokenizer.ts
5. cost.ts - imports types.ts
6. subagent.ts - imports types.ts, cost.ts
7. formatter.ts - imports types.ts, cost.ts
8. tokenscope.ts - imports all of the above
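The wiring order above can be sketched with stubbed classes: dependencies are constructed first and injected into the classes that need them. The class bodies here are empty stand-ins; the real implementations come from the lib/ modules.

```typescript
// Minimal sketch of the construction order tokenscope.ts would follow
// after the split (stub classes; the real ones live in plugin/lib/).
class TokenizerManager {}
class ContentCollector {}
class TokenAnalysisEngine {
  constructor(readonly tokenizers: TokenizerManager, readonly collector: ContentCollector) {}
}
class CostCalculator {
  constructor(readonly pricing: Record<string, unknown>) {}
}
class OutputFormatter {
  constructor(readonly cost: CostCalculator) {}
}

// Construction mirrors the import order: tokenizer → analyzer → cost → formatter.
const engine = new TokenAnalysisEngine(new TokenizerManager(), new ContentCollector())
const formatter = new OutputFormatter(new CostCalculator({}))
```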
---
## 4. Updated install.sh Changes
### Change 1: Add lib directory creation
mkdir -p "$OPENCODE_DIR/plugin"
mkdir -p "$OPENCODE_DIR/plugin/lib" # NEW
mkdir -p "$OPENCODE_DIR/command"

### Change 2: Updated FILES array
FILES=(
"plugin/tokenscope.ts"
"plugin/lib/types.ts"
"plugin/lib/config.ts"
"plugin/lib/tokenizer.ts"
"plugin/lib/analyzer.ts"
"plugin/lib/cost.ts"
"plugin/lib/subagent.ts"
"plugin/lib/formatter.ts"
"plugin/models.json"
"plugin/package.json"
"plugin/install.sh"
"command/tokenscope.md"
)

### Change 3: Updated file download logic
for file in "${FILES[@]}"; do
filename=$(basename "$file")
dir=$(dirname "$file")
echo_info "Downloading $filename..."
if curl -fsSL "$REPO_URL/raw/main/$file" -o "$TEMP_DIR/$filename" 2>/dev/null; then
if [ "$dir" = "plugin" ]; then
mv "$TEMP_DIR/$filename" "$OPENCODE_DIR/plugin/$filename"
elif [ "$dir" = "plugin/lib" ]; then
mv "$TEMP_DIR/$filename" "$OPENCODE_DIR/plugin/lib/$filename"
else
mv "$TEMP_DIR/$filename" "$OPENCODE_DIR/command/$filename"
fi
# ...

### Change 4: Updated verification
REQUIRED_FILES=(
"$OPENCODE_DIR/plugin/tokenscope.ts"
"$OPENCODE_DIR/plugin/lib/types.ts"
"$OPENCODE_DIR/plugin/lib/config.ts"
"$OPENCODE_DIR/plugin/lib/tokenizer.ts"
"$OPENCODE_DIR/plugin/lib/analyzer.ts"
"$OPENCODE_DIR/plugin/lib/cost.ts"
"$OPENCODE_DIR/plugin/lib/subagent.ts"
"$OPENCODE_DIR/plugin/lib/formatter.ts"
"$OPENCODE_DIR/plugin/models.json"
"$OPENCODE_DIR/plugin/node_modules/js-tiktoken"
"$OPENCODE_DIR/plugin/node_modules/@huggingface/transformers"
"$OPENCODE_DIR/command/tokenscope.md"
)

---
## 5. Step-by-Step Implementation Plan
### Phase 1: Create lib directory and types
| Step | Action | Details |
|------|--------|---------|
| 1.1 | Create plugin/lib/ directory | New directory for modules |
| 1.2 | Create plugin/lib/types.ts | Extract interfaces (lines 12-175) + type guards (lines 43-53) |
### Phase 2: Create config and tokenizer modules
| Step | Action | Details |
|------|--------|---------|
| 2.1 | Create plugin/lib/config.ts | Extract constants (lines 7-8), model maps (lines 195-249), loadModelPricing() (lines 179-193) |
| 2.2 | Create plugin/lib/tokenizer.ts | Extract TokenizerManager class (lines 253-375) |
### Phase 3: Create analyzer module
| Step | Action | Details |
|------|--------|---------|
| 3.1 | Create plugin/lib/analyzer.ts | Extract ModelResolver (lines 379-448), ContentCollector (lines 452-587), TokenAnalysisEngine (lines 591-746) |
### Phase 4: Create cost and subagent modules
| Step | Action | Details |
|------|--------|---------|
| 4.1 | Create plugin/lib/cost.ts | Extract CostCalculator class (lines 750-802) |
| 4.2 | Create plugin/lib/subagent.ts | Extract SubagentAnalyzer class (lines 806-933) |
### Phase 5: Create formatter module
| Step | Action | Details |
|------|--------|---------|
| 5.1 | Create plugin/lib/formatter.ts | Extract OutputFormatter class (lines 938-1259) |
### Phase 6: Refactor main entry point
| Step | Action | Details |
|------|--------|---------|
| 6.1 | Refactor plugin/tokenscope.ts | Keep only Plugin export (lines 1263-1339), add imports from lib modules |
### Phase 7: Update supporting files
| Step | Action | Details |
|------|--------|---------|
| 7.1 | Update plugin/install.sh | Add lib directory creation, update FILES array, update download logic, update verification |
| 7.2 | Update plugin/package.json | Bump version to 1.3.0 |
| 7.3 | Update README.md | Update Architecture section to reflect new file structure |
---
## 6. Final Implementation Todo List
Here's the todo list for your review before we start implementing:
---
## Summary Table (Revised)
| File | Lines | Purpose |
|------|-------|---------|
| tokenscope.ts | ~80 | Plugin entry point, tool registration |
| lib/types.ts | ~165 | All interfaces and type guards |
| lib/config.ts | ~95 | Constants, model maps, pricing loader |
| lib/tokenizer.ts | ~125 | TokenizerManager - token counting |
| lib/analyzer.ts | ~320 | ModelResolver, ContentCollector, TokenAnalysisEngine |
| lib/cost.ts | ~55 | CostCalculator - cost calculation |
| lib/subagent.ts | ~130 | SubagentAnalyzer - child session analysis |
| lib/formatter.ts | ~325 | OutputFormatter - visual report generation |
| Total | ~1295 | 8 files (vs 1 file with 1340 lines) |
---
The todo list above shows all 12 implementation steps. Let me know when you're ready to proceed, or if you'd like any adjustments to the plan!
Great! Make sure each completed step is git committed before moving on to the next one. Each git commit should be a conventional commit of max 70 chars. THIS IS VERY IMPORTANT. Please start with the plan now.
I'll start implementing the plan now, committing after each step. Let me begin.
All 2 tools executed successfully.
On branch refactor-the-code
Untracked files:
(use "git add <file>..." to include in what will be committed)
plugin/lib/
nothing added to commit but untracked files present (use "git add" to track)
f5a9a60 Remove duplicate install.sh, keep only plugin/install.sh
bc33e37 Bump version to 1.2.1
93bd925 Clarify update instructions for different versions
9861778 Document local update script in README
c7ccd31 Add install.sh to plugin folder for easier updates
[refactor-the-code 0c28f19] refactor: add plugin/lib directory for multi-file structure
1 file changed, 0 insertions(+), 0 deletions(-)
create mode 100644 plugin/lib/.gitkeep
Now creating lib/types.ts with all interfaces and type guards:
// Types and interfaces for the tokenscope plugin
export interface SessionMessage {
info: SessionMessageInfo
parts: SessionMessagePart[]
}
export interface SessionMessageInfo {
id: string
role: string
modelID?: string
providerID?: string
system?: string[]
tokens?: TokenUsage
cost?: number
}
export interface TokenUsage {
input?: number
output?: number
reasoning?: number
cache?: {
read?: number
write?: number
}
}
export type SessionMessagePart =
| { type: "text"; text: string; synthetic?: boolean }
| { type: "reasoning"; text: string }
| { type: "tool"; tool: string; state: ToolState }
| { type: string; [key: string]: unknown }
export interface ToolState {
status: "pending" | "running" | "completed" | "error"
output?: string
}
export interface CategoryEntry {
label: string
tokens: number
}
export interface CategorySummary {
label: string
totalTokens: number
entries: CategoryEntry[]
allEntries: CategoryEntry[]
}
export interface TokenAnalysis {
sessionID: string
model: TokenModel
categories: {
system: CategorySummary
user: CategorySummary
assistant: CategorySummary
tools: CategorySummary
reasoning: CategorySummary
}
totalTokens: number
inputTokens: number
outputTokens: number
reasoningTokens: number
cacheReadTokens: number
cacheWriteTokens: number
assistantMessageCount: number
mostRecentInput: number
mostRecentOutput: number
mostRecentReasoning: number
mostRecentCacheRead: number
mostRecentCacheWrite: number
sessionCost: number
mostRecentCost: number
allToolsCalled: string[]
toolCallCounts: Map<string, number>
subagentAnalysis?: SubagentAnalysis
}
export interface TokenModel {
name: string
spec: TokenizerSpec
}
export type TokenizerSpec =
| { kind: "tiktoken"; model: string }
| { kind: "transformers"; hub: string }
| { kind: "approx" }
export interface CategoryEntrySource {
label: string
content: string
}
export interface CostEstimate {
isSubscription: boolean
apiSessionCost: number
apiMostRecentCost: number
estimatedSessionCost: number
estimatedInputCost: number
estimatedOutputCost: number
estimatedCacheReadCost: number
estimatedCacheWriteCost: number
pricePerMillionInput: number
pricePerMillionOutput: number
pricePerMillionCacheRead: number
pricePerMillionCacheWrite: number
inputTokens: number
outputTokens: number
reasoningTokens: number
cacheReadTokens: number
cacheWriteTokens: number
}
export interface SubagentSummary {
sessionID: string
title: string
agentType: string
inputTokens: number
outputTokens: number
reasoningTokens: number
cacheReadTokens: number
cacheWriteTokens: number
totalTokens: number
apiCost: number
estimatedCost: number
assistantMessageCount: number
}
export interface SubagentAnalysis {
subagents: SubagentSummary[]
totalInputTokens: number
totalOutputTokens: number
totalReasoningTokens: number
totalCacheReadTokens: number
totalCacheWriteTokens: number
totalTokens: number
totalApiCost: number
totalEstimatedCost: number
totalApiCalls: number
}
export interface ModelPricing {
input: number
output: number
cacheWrite: number
cacheRead: number
}
export interface ChildSession {
id: string
title: string
parentID?: string
}
// Type guards
export function isToolPart(part: SessionMessagePart): part is { type: "tool"; tool: string; state: ToolState } {
return part.type === "tool"
}
export function isReasoningPart(part: SessionMessagePart): part is { type: "reasoning"; text: string } {
return part.type === "reasoning"
}
export function isTextPart(part: SessionMessagePart): part is { type: "text"; text: string; synthetic?: boolean } {
return part.type === "text"
}
[refactor-the-code f95be04] refactor: extract types and interfaces to lib/types.ts
1 file changed, 168 insertions(+)
create mode 100644 plugin/lib/types.ts
Now creating lib/config.ts:
// Configuration constants, model maps, and pricing loader
import path from "path"
import fs from "fs/promises"
import { fileURLToPath } from "url"
import type { TokenizerSpec, ModelPricing } from "./types"
export const DEFAULT_ENTRY_LIMIT = 3
export const VENDOR_ROOT = path.join(path.dirname(fileURLToPath(import.meta.url)), "..", "vendor", "node_modules")
// Pricing cache
let PRICING_CACHE: Record<string, ModelPricing> | null = null
export async function loadModelPricing(): Promise<Record<string, ModelPricing>> {
if (PRICING_CACHE) return PRICING_CACHE
try {
const modelsPath = path.join(path.dirname(fileURLToPath(import.meta.url)), "..", "models.json")
const data = await fs.readFile(modelsPath, "utf8")
PRICING_CACHE = JSON.parse(data)
return PRICING_CACHE!
} catch {
PRICING_CACHE = { default: { input: 1, output: 3, cacheWrite: 0, cacheRead: 0 } }
return PRICING_CACHE
}
}
// OpenAI model mapping for tiktoken
export const OPENAI_MODEL_MAP: Record<string, string> = {
"gpt-5": "gpt-4o",
"o4-mini": "gpt-4o",
"o3": "gpt-4o",
"o3-mini": "gpt-4o",
"o1": "gpt-4o",
"o1-pro": "gpt-4o",
"gpt-4.1": "gpt-4o",
"gpt-4.1-mini": "gpt-4o",
"gpt-4o": "gpt-4o",
"gpt-4o-mini": "gpt-4o-mini",
"gpt-4-turbo": "gpt-4",
"gpt-4": "gpt-4",
"gpt-3.5-turbo": "gpt-3.5-turbo",
"text-embedding-3-large": "text-embedding-3-large",
"text-embedding-3-small": "text-embedding-3-small",
"text-embedding-ada-002": "text-embedding-ada-002",
}
// Transformers model mapping for HuggingFace tokenizers
export const TRANSFORMERS_MODEL_MAP: Record<string, string> = {
"claude-opus-4": "Xenova/claude-tokenizer",
"claude-sonnet-4": "Xenova/claude-tokenizer",
"claude-3.7-sonnet": "Xenova/claude-tokenizer",
"claude-3.5-sonnet": "Xenova/claude-tokenizer",
"claude-3.5-haiku": "Xenova/claude-tokenizer",
"claude-3-opus": "Xenova/claude-tokenizer",
"claude-3-sonnet": "Xenova/claude-tokenizer",
"claude-3-haiku": "Xenova/claude-tokenizer",
"claude-2.1": "Xenova/claude-tokenizer",
"claude-2.0": "Xenova/claude-tokenizer",
"claude-instant-1.2": "Xenova/claude-tokenizer",
"llama-4": "Xenova/llama4-tokenizer",
"llama-3.3": "unsloth/Llama-3.3-70B-Instruct",
"llama-3.2": "Xenova/Llama-3.2-Tokenizer",
"llama-3.1": "Xenova/Meta-Llama-3.1-Tokenizer",
"llama-3": "Xenova/llama3-tokenizer-new",
"llama-2": "Xenova/llama2-tokenizer",
"code-llama": "Xenova/llama-code-tokenizer",
"deepseek-r1": "deepseek-ai/DeepSeek-R1",
"deepseek-v3": "deepseek-ai/DeepSeek-V3",
"deepseek-v2": "deepseek-ai/DeepSeek-V2",
"mistral-large": "Xenova/mistral-tokenizer-v3",
"mistral-small": "Xenova/mistral-tokenizer-v3",
"mistral-nemo": "Xenova/Mistral-Nemo-Instruct-Tokenizer",
"devstral-small": "Xenova/Mistral-Nemo-Instruct-Tokenizer",
"codestral": "Xenova/mistral-tokenizer-v3",
}
// Provider default tokenizers
export const PROVIDER_DEFAULTS: Record<string, TokenizerSpec> = {
anthropic: { kind: "transformers", hub: "Xenova/claude-tokenizer" },
meta: { kind: "transformers", hub: "Xenova/Meta-Llama-3.1-Tokenizer" },
mistral: { kind: "transformers", hub: "Xenova/mistral-tokenizer-v3" },
deepseek: { kind: "transformers", hub: "deepseek-ai/DeepSeek-V3" },
google: { kind: "transformers", hub: "google/gemma-2-9b-it" },
}
[refactor-the-code b244d3a] refactor: extract config and model maps to lib/config.ts
1 file changed, 85 insertions(+)
create mode 100644 plugin/lib/config.ts
Now creating lib/tokenizer.ts:
// TokenizerManager - handles token counting with multiple backends
import path from "path"
import fs from "fs/promises"
import { pathToFileURL } from "url"
import type { TokenModel } from "./types"
import { VENDOR_ROOT } from "./config"
export class TokenizerManager {
private tiktokenCache = new Map<string, any>()
private transformerCache = new Map<string, any>()
private tiktokenModule?: Promise<any>
private transformersModule?: Promise<any>
async countTokens(content: string, model: TokenModel): Promise<number> {
if (!content.trim()) return 0
try {
switch (model.spec.kind) {
case "approx":
return this.approximateTokenCount(content)
case "tiktoken":
return await this.countWithTiktoken(content, model.spec.model)
case "transformers":
return await this.countWithTransformers(content, model.spec.hub)
}
} catch (error) {
console.error(`Token counting error for ${model.name}:`, error)
return this.approximateTokenCount(content)
}
}
private approximateTokenCount(content: string): number {
return Math.ceil(content.length / 4)
}
private async countWithTiktoken(content: string, model: string): Promise<number> {
const encoder = await this.loadTiktokenEncoder(model)
try {
return encoder.encode(content).length
} catch {
return this.approximateTokenCount(content)
}
}
private async countWithTransformers(content: string, hub: string): Promise<number> {
const tokenizer = await this.loadTransformersTokenizer(hub)
if (!tokenizer || typeof tokenizer.encode !== "function") {
return this.approximateTokenCount(content)
}
try {
const encoding = await tokenizer.encode(content)
return Array.isArray(encoding) ? encoding.length : (encoding?.length ?? this.approximateTokenCount(content))
} catch {
return this.approximateTokenCount(content)
}
}
private async loadTiktokenEncoder(model: string) {
if (this.tiktokenCache.has(model)) {
return this.tiktokenCache.get(model)
}
const mod = await this.loadTiktokenModule()
const encodingForModel = mod.encodingForModel ?? mod.default?.encodingForModel
const getEncoding = mod.getEncoding ?? mod.default?.getEncoding
if (typeof getEncoding !== "function") {
return { encode: (text: string) => ({ length: Math.ceil(text.length / 4) }) }
}
let encoder
try {
encoder = encodingForModel(model)
} catch {
encoder = getEncoding("cl100k_base")
}
this.tiktokenCache.set(model, encoder)
return encoder
}
private async loadTiktokenModule() {
if (!this.tiktokenModule) {
this.tiktokenModule = this.importFromVendor("js-tiktoken")
}
return this.tiktokenModule
}
private async loadTransformersTokenizer(hub: string) {
if (this.transformerCache.has(hub)) {
return this.transformerCache.get(hub)
}
try {
const { AutoTokenizer } = await this.loadTransformersModule()
const tokenizer = await AutoTokenizer.from_pretrained(hub)
this.transformerCache.set(hub, tokenizer)
return tokenizer
} catch {
this.transformerCache.set(hub, null)
return null
}
}
private async loadTransformersModule() {
if (!this.transformersModule) {
this.transformersModule = this.importFromVendor("@huggingface/transformers")
}
return this.transformersModule
}
private async importFromVendor(pkg: string) {
const pkgJsonPath = path.join(VENDOR_ROOT, pkg, "package.json")
let data: string
try {
data = await fs.readFile(pkgJsonPath, "utf8")
} catch {
throw new Error(
`Token analyzer dependencies missing. Run the install.sh script to install vendor tokenizers.\n` +
`Expected path: ${pkgJsonPath}`
)
}
const manifest = JSON.parse(data)
const entry = manifest.module ?? manifest.main ?? "index.js"
const entryPath = path.join(VENDOR_ROOT, pkg, entry)
return import(pathToFileURL(entryPath).href)
}
}
[refactor-the-code b8a8753] refactor: extract TokenizerManager to lib/tokenizer.ts
1 file changed, 131 insertions(+)
create mode 100644 plugin/lib/tokenizer.ts
Now creating lib/analyzer.ts with ModelResolver, ContentCollector, and TokenAnalysisEngine:
// Analysis classes - ModelResolver, ContentCollector, TokenAnalysisEngine
import type {
SessionMessage,
SessionMessagePart,
TokenModel,
TokenAnalysis,
CategoryEntrySource,
CategoryEntry,
CategorySummary,
} from "./types"
import { isToolPart as toolGuard, isReasoningPart as reasoningGuard, isTextPart as textGuard } from "./types"
import { OPENAI_MODEL_MAP, TRANSFORMERS_MODEL_MAP, PROVIDER_DEFAULTS } from "./config"
import { TokenizerManager } from "./tokenizer"
// Model Resolution
export class ModelResolver {
resolveTokenModel(messages: SessionMessage[]): TokenModel {
for (const message of [...messages].reverse()) {
const modelID = this.canonicalize(message.info.modelID)
const providerID = this.canonicalize(message.info.providerID)
const openaiModel = this.resolveOpenAIModel(modelID, providerID)
if (openaiModel) return openaiModel
const transformerModel = this.resolveTransformersModel(modelID, providerID)
if (transformerModel) return transformerModel
}
return { name: "approx", spec: { kind: "approx" } }
}
private resolveOpenAIModel(modelID?: string, providerID?: string): TokenModel | undefined {
if (providerID === "openai" || providerID === "opencode" || providerID === "azure") {
const mapped = this.mapOpenAI(modelID)
return { name: modelID ?? mapped, spec: { kind: "tiktoken", model: mapped } }
}
if (modelID && OPENAI_MODEL_MAP[modelID]) {
return { name: modelID, spec: { kind: "tiktoken", model: OPENAI_MODEL_MAP[modelID] } }
}
return undefined
}
private resolveTransformersModel(modelID?: string, providerID?: string): TokenModel | undefined {
if (modelID && TRANSFORMERS_MODEL_MAP[modelID]) {
return { name: modelID, spec: { kind: "transformers", hub: TRANSFORMERS_MODEL_MAP[modelID] } }
}
if (providerID && PROVIDER_DEFAULTS[providerID]) {
return { name: modelID ?? providerID, spec: PROVIDER_DEFAULTS[providerID] }
}
// Prefix-based fallbacks
if (modelID?.startsWith("claude")) {
return { name: modelID, spec: { kind: "transformers", hub: "Xenova/claude-tokenizer" } }
}
if (modelID?.startsWith("llama")) {
return {
name: modelID,
spec: { kind: "transformers", hub: TRANSFORMERS_MODEL_MAP[modelID] ?? "Xenova/Meta-Llama-3.1-Tokenizer" },
}
}
if (modelID?.startsWith("mistral")) {
return { name: modelID, spec: { kind: "transformers", hub: "Xenova/mistral-tokenizer-v3" } }
}
if (modelID?.startsWith("deepseek")) {
return { name: modelID, spec: { kind: "transformers", hub: "deepseek-ai/DeepSeek-V3" } }
}
return undefined
}
private mapOpenAI(modelID?: string): string {
if (!modelID) return "cl100k_base"
return OPENAI_MODEL_MAP[modelID] ?? modelID
}
private canonicalize(value?: string): string | undefined {
return value?.split("/").pop()?.toLowerCase().trim()
}
}
// Content Collection
export class ContentCollector {
collectSystemPrompts(messages: SessionMessage[]): CategoryEntrySource[] {
const prompts = new Map<string, string>()
for (const message of messages) {
if (message.info.role === "system") {
const content = this.extractText(message.parts)
if (content) prompts.set(content, content)
}
if (message.info.role === "assistant") {
for (const prompt of message.info.system ?? []) {
const trimmed = (prompt ?? "").trim()
if (trimmed) prompts.set(trimmed, trimmed)
}
}
}
return Array.from(prompts.values()).map((content, index) => ({
label: this.identifySystemPrompt(content, index + 1),
content,
}))
}
collectMessageTexts(messages: SessionMessage[], role: "user" | "assistant"): CategoryEntrySource[] {
const results: CategoryEntrySource[] = []
let index = 0
for (const message of messages) {
if (message.info.role !== role) continue
const content = this.extractText(message.parts)
if (!content) continue
index += 1
results.push({ label: `${this.capitalize(role)}#${index}`, content })
}
return results
}
collectToolOutputs(messages: SessionMessage[]): CategoryEntrySource[] {
const toolOutputs = new Map<string, string>()
for (const message of messages) {
for (const part of message.parts) {
if (!toolGuard(part)) continue
if (part.state.status !== "completed") continue
const output = (part.state.output ?? "").toString().trim()
if (!output) continue
const toolName = part.tool || "tool"
const existing = toolOutputs.get(toolName) || ""
toolOutputs.set(toolName, existing + (existing ? "\n\n" : "") + output)
}
}
return Array.from(toolOutputs.entries()).map(([toolName, content]) => ({
label: toolName,
content,
}))
}
collectToolCallCounts(messages: SessionMessage[]): Map<string, number> {
const toolCounts = new Map<string, number>()
for (const message of messages) {
for (const part of message.parts) {
if (!toolGuard(part)) continue
const toolName = part.tool || "tool"
if (toolName) {
toolCounts.set(toolName, (toolCounts.get(toolName) || 0) + 1)
}
}
}
return toolCounts
}
collectAllToolsCalled(messages: SessionMessage[]): string[] {
return Array.from(this.collectToolCallCounts(messages).keys()).sort()
}
collectReasoningTexts(messages: SessionMessage[]): CategoryEntrySource[] {
const results: CategoryEntrySource[] = []
let index = 0
for (const message of messages) {
for (const part of message.parts) {
if (!reasoningGuard(part)) continue
const text = (part.text ?? "").toString().trim()
if (!text) continue
index += 1
results.push({ label: `Reasoning#${index}`, content: text })
}
}
return results
}
private extractText(parts: SessionMessagePart[]): string {
return parts
.filter(textGuard)
.map((part) => part.text ?? "")
.map((text) => text.trim())
.filter(Boolean)
.join("\n\n")
}
private identifySystemPrompt(content: string, index: number): string {
const lower = content.toLowerCase()
if (lower.includes("opencode") && lower.includes("cli") && content.length > 500) return "System#MainPrompt"
if (lower.includes("opencode") && lower.includes("cli") && content.length <= 500) return "System#ShortPrompt"
if (lower.includes("agent") && lower.includes("mode")) return "System#AgentMode"
if (lower.includes("permission") || lower.includes("allowed") || lower.includes("deny")) return "System#Permissions"
if (lower.includes("tool") && (lower.includes("rule") || lower.includes("guideline"))) return "System#ToolRules"
if (lower.includes("format") || lower.includes("style") || lower.includes("concise")) return "System#Formatting"
if (lower.includes("project") || lower.includes("repository") || lower.includes("codebase"))
return "System#ProjectContext"
if (lower.includes("session") || lower.includes("context") || lower.includes("memory")) return "System#SessionMgmt"
if (content.includes("@") && (content.includes(".md") || content.includes(".txt"))) return "System#FileRefs"
if (content.includes("name:") && content.includes("description:")) return "System#AgentDef"
if (lower.includes("code") && (lower.includes("convention") || lower.includes("standard")))
return "System#CodeGuidelines"
return `System#${index}`
}
private capitalize(value: string): string {
if (!value) return value
return value[0].toUpperCase() + value.slice(1)
}
}
// Token Analysis Engine
export class TokenAnalysisEngine {
constructor(
private tokenizerManager: TokenizerManager,
private contentCollector: ContentCollector
) {}
async analyze(
sessionID: string,
messages: SessionMessage[],
tokenModel: TokenModel,
entryLimit: number
): Promise<TokenAnalysis> {
const systemPrompts = this.contentCollector.collectSystemPrompts(messages)
const userTexts = this.contentCollector.collectMessageTexts(messages, "user")
const assistantTexts = this.contentCollector.collectMessageTexts(messages, "assistant")
const toolOutputs = this.contentCollector.collectToolOutputs(messages)
const reasoningTraces = this.contentCollector.collectReasoningTexts(messages)
const allToolsCalled = this.contentCollector.collectAllToolsCalled(messages)
const toolCallCounts = this.contentCollector.collectToolCallCounts(messages)
const [system, user, assistant, tools, reasoning] = await Promise.all([
this.buildCategory("system", systemPrompts, tokenModel, entryLimit),
this.buildCategory("user", userTexts, tokenModel, entryLimit),
this.buildCategory("assistant", assistantTexts, tokenModel, entryLimit),
this.buildCategory("tools", toolOutputs, tokenModel, entryLimit),
this.buildCategory("reasoning", reasoningTraces, tokenModel, entryLimit),
])
const analysis: TokenAnalysis = {
sessionID,
model: tokenModel,
categories: { system, user, assistant, tools, reasoning },
totalTokens:
system.totalTokens + user.totalTokens + assistant.totalTokens + tools.totalTokens + reasoning.totalTokens,
inputTokens: 0,
outputTokens: 0,
reasoningTokens: 0,
cacheReadTokens: 0,
cacheWriteTokens: 0,
assistantMessageCount: 0,
mostRecentInput: 0,
mostRecentOutput: 0,
mostRecentReasoning: 0,
mostRecentCacheRead: 0,
mostRecentCacheWrite: 0,
sessionCost: 0,
mostRecentCost: 0,
allToolsCalled,
toolCallCounts,
}
this.applyTelemetryAdjustments(analysis, messages)
return analysis
}
private async buildCategory(
label: string,
sources: CategoryEntrySource[],
model: TokenModel,
entryLimit: number
): Promise<CategorySummary> {
const entries: CategoryEntry[] = []
for (const source of sources) {
const tokens = await this.tokenizerManager.countTokens(source.content, model)
if (tokens > 0) {
entries.push({ label: source.label, tokens })
}
}
entries.sort((a, b) => b.tokens - a.tokens)
const limited = entries.slice(0, entryLimit)
const totalTokens = entries.reduce((sum, entry) => sum + entry.tokens, 0)
return { label, totalTokens, entries: limited, allEntries: entries }
}
private applyTelemetryAdjustments(analysis: TokenAnalysis, messages: SessionMessage[]) {
const assistants = messages
.filter((m) => m.info.role === "assistant" && (m.info?.tokens || m.info?.cost !== undefined))
.map((m) => ({ msg: m, tokens: m.info.tokens, cost: m.info.cost ?? 0 }))
let totalInput = 0,
totalOutput = 0,
totalReasoning = 0
let totalCacheRead = 0,
totalCacheWrite = 0,
totalCost = 0
for (const { tokens, cost } of assistants) {
if (tokens) {
totalInput += Number(tokens.input) || 0
totalOutput += Number(tokens.output) || 0
totalReasoning += Number(tokens.reasoning) || 0
totalCacheRead += Number(tokens.cache?.read) || 0
totalCacheWrite += Number(tokens.cache?.write) || 0
}
totalCost += Number(cost) || 0
}
// Walk backwards to the latest assistant message that reported nonzero usage;
// fall back to the last assistant message if none did.
const mostRecentWithUsage = [...assistants]
.reverse()
.find(
({ tokens }) =>
tokens &&
(Number(tokens.input) || 0) +
(Number(tokens.output) || 0) +
(Number(tokens.reasoning) || 0) +
(Number(tokens.cache?.read) || 0) +
(Number(tokens.cache?.write) || 0) >
0
) ?? assistants[assistants.length - 1]
let mostRecentInput = 0,
mostRecentOutput = 0,
mostRecentReasoning = 0
let mostRecentCacheRead = 0,
mostRecentCacheWrite = 0,
mostRecentCost = 0
if (mostRecentWithUsage) {
const t = mostRecentWithUsage.tokens
if (t) {
mostRecentInput = Number(t.input) || 0
mostRecentOutput = Number(t.output) || 0
mostRecentReasoning = Number(t.reasoning) || 0
mostRecentCacheRead = Number(t.cache?.read) || 0
mostRecentCacheWrite = Number(t.cache?.write) || 0
}
mostRecentCost = Number(mostRecentWithUsage.cost) || 0
}
analysis.inputTokens = totalInput
analysis.outputTokens = totalOutput
analysis.reasoningTokens = totalReasoning
analysis.cacheReadTokens = totalCacheRead
analysis.cacheWriteTokens = totalCacheWrite
analysis.assistantMessageCount = assistants.length
analysis.sessionCost = totalCost
analysis.mostRecentCost = mostRecentCost
analysis.mostRecentInput = mostRecentInput
analysis.mostRecentOutput = mostRecentOutput
analysis.mostRecentReasoning = mostRecentReasoning
analysis.mostRecentCacheRead = mostRecentCacheRead
analysis.mostRecentCacheWrite = mostRecentCacheWrite
// If the API reported more input than the locally counted user + tool tokens,
// attribute the difference to the (otherwise uncounted) system prompt.
const recentApiInputTotal = mostRecentInput + mostRecentCacheRead
const localUserAndTools = analysis.categories.user.totalTokens + analysis.categories.tools.totalTokens
const inferredSystemTokens = Math.max(0, recentApiInputTotal - localUserAndTools)
if (inferredSystemTokens > 0 && analysis.categories.system.totalTokens === 0) {
analysis.categories.system.totalTokens = inferredSystemTokens
analysis.categories.system.entries = [{ label: "System (inferred from API)", tokens: inferredSystemTokens }]
analysis.categories.system.allEntries = analysis.categories.system.entries
}
analysis.totalTokens =
analysis.categories.system.totalTokens +
analysis.categories.user.totalTokens +
analysis.categories.assistant.totalTokens +
analysis.categories.tools.totalTokens +
analysis.categories.reasoning.totalTokens
}
}
[refactor-the-code 1dc5297] refactor: extract analysis classes to lib/analyzer.ts
1 file changed, 399 insertions(+)
create mode 100644 plugin/lib/analyzer.ts
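The buildCategory step extracted above boils down to: sort entries by token count descending, cap the displayed list at `entryLimit`, but total across all entries. A standalone sketch of that reduction (the names `Entry` and `summarizeCategory` are illustrative, not the plugin's actual exports):

```typescript
// Sketch of the buildCategory reduction: sort descending, keep the top
// `entryLimit` for display, but sum tokens over ALL entries.
interface Entry { label: string; tokens: number }

function summarizeCategory(entries: Entry[], entryLimit: number) {
  const sorted = [...entries].sort((a, b) => b.tokens - a.tokens)
  return {
    entries: sorted.slice(0, entryLimit),                       // displayed subset
    totalTokens: sorted.reduce((sum, e) => sum + e.tokens, 0),  // full total
  }
}

const demo = summarizeCategory(
  [
    { label: "User#1", tokens: 120 },
    { label: "User#2", tokens: 480 },
    { label: "User#3", tokens: 60 },
  ],
  2
)
// demo.entries lists User#2 then User#1; demo.totalTokens is 660
```

This mirrors why the summary keeps both `entries` (limited) and `allEntries` (complete) in the real class: the bars need the capped list, while percentages need the full total.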
Now creating lib/cost.ts:
// CostCalculator - calculates costs from token analysis
import type { TokenAnalysis, CostEstimate, ModelPricing } from "./types"
export class CostCalculator {
constructor(private pricingData: Record<string, ModelPricing>) {}
calculateCost(analysis: TokenAnalysis): CostEstimate {
const pricing = this.getPricing(analysis.model.name)
const hasActivity = analysis.assistantMessageCount > 0 && (analysis.inputTokens > 0 || analysis.outputTokens > 0)
const isSubscription = hasActivity && analysis.sessionCost === 0
const estimatedInputCost = (analysis.inputTokens / 1_000_000) * pricing.input
const estimatedOutputCost = ((analysis.outputTokens + analysis.reasoningTokens) / 1_000_000) * pricing.output
const estimatedCacheReadCost = (analysis.cacheReadTokens / 1_000_000) * pricing.cacheRead
const estimatedCacheWriteCost = (analysis.cacheWriteTokens / 1_000_000) * pricing.cacheWrite
const estimatedSessionCost =
estimatedInputCost + estimatedOutputCost + estimatedCacheReadCost + estimatedCacheWriteCost
return {
isSubscription,
apiSessionCost: analysis.sessionCost,
apiMostRecentCost: analysis.mostRecentCost,
estimatedSessionCost,
estimatedInputCost,
estimatedOutputCost,
estimatedCacheReadCost,
estimatedCacheWriteCost,
pricePerMillionInput: pricing.input,
pricePerMillionOutput: pricing.output,
pricePerMillionCacheRead: pricing.cacheRead,
pricePerMillionCacheWrite: pricing.cacheWrite,
inputTokens: analysis.inputTokens,
outputTokens: analysis.outputTokens,
reasoningTokens: analysis.reasoningTokens,
cacheReadTokens: analysis.cacheReadTokens,
cacheWriteTokens: analysis.cacheWriteTokens,
}
}
getPricing(modelName: string): ModelPricing {
const normalizedName = this.normalizeModelName(modelName)
if (this.pricingData[normalizedName]) return this.pricingData[normalizedName]
const lowerModel = normalizedName.toLowerCase()
for (const [key, pricing] of Object.entries(this.pricingData)) {
if (lowerModel.startsWith(key.toLowerCase())) return pricing
}
return this.pricingData["default"] || { input: 1, output: 3, cacheWrite: 0, cacheRead: 0 }
}
private normalizeModelName(modelName: string): string {
return modelName.includes("/") ? modelName.split("/").pop() || modelName : modelName
}
}
[refactor-the-code 22fa67d] refactor: extract CostCalculator to lib/cost.ts
1 file changed, 57 insertions(+)
create mode 100644 plugin/lib/cost.ts
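The extracted CostCalculator applies a straightforward per-million-token formula: fresh input, output plus reasoning, and cache reads/writes each priced separately. A standalone sketch of that arithmetic (the pricing numbers below are made up for illustration, not real provider rates):

```typescript
// Sketch of the per-million-token cost formula CostCalculator applies.
// Output and reasoning tokens share the output rate, matching the class above.
interface Pricing { input: number; output: number; cacheRead: number; cacheWrite: number }
interface Usage { input: number; output: number; reasoning: number; cacheRead: number; cacheWrite: number }

function estimateCost(t: Usage, p: Pricing): number {
  return (
    (t.input / 1_000_000) * p.input +
    ((t.output + t.reasoning) / 1_000_000) * p.output +
    (t.cacheRead / 1_000_000) * p.cacheRead +
    (t.cacheWrite / 1_000_000) * p.cacheWrite
  )
}

const cost = estimateCost(
  { input: 2_000_000, output: 500_000, reasoning: 500_000, cacheRead: 1_000_000, cacheWrite: 0 },
  { input: 3, output: 15, cacheRead: 0.3, cacheWrite: 3.75 }
)
// 2M × $3/M + 1M × $15/M + 1M × $0.30/M = $21.30
```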
Now creating lib/subagent.ts:
// SubagentAnalyzer - analyzes child sessions from Task tool calls
import type { SessionMessage, SubagentSummary, SubagentAnalysis, ChildSession, ModelPricing } from "./types"
import { CostCalculator } from "./cost"
export class SubagentAnalyzer {
constructor(
private client: any,
private costCalculator: CostCalculator,
private pricingData: Record<string, ModelPricing>
) {}
async analyzeChildSessions(parentSessionID: string): Promise<SubagentAnalysis> {
const result: SubagentAnalysis = {
subagents: [],
totalInputTokens: 0,
totalOutputTokens: 0,
totalReasoningTokens: 0,
totalCacheReadTokens: 0,
totalCacheWriteTokens: 0,
totalTokens: 0,
totalApiCost: 0,
totalEstimatedCost: 0,
totalApiCalls: 0,
}
try {
const childrenResponse = await this.client.session.children({ path: { id: parentSessionID } })
const children: ChildSession[] = ((childrenResponse as any)?.data ?? childrenResponse ?? []) as ChildSession[]
if (!Array.isArray(children) || children.length === 0) return result
for (const child of children) {
const summary = await this.analyzeChildSession(child)
if (summary) {
result.subagents.push(summary)
result.totalInputTokens += summary.inputTokens
result.totalOutputTokens += summary.outputTokens
result.totalReasoningTokens += summary.reasoningTokens
result.totalCacheReadTokens += summary.cacheReadTokens
result.totalCacheWriteTokens += summary.cacheWriteTokens
result.totalTokens += summary.totalTokens
result.totalApiCost += summary.apiCost
result.totalEstimatedCost += summary.estimatedCost
result.totalApiCalls += summary.assistantMessageCount
}
const nestedAnalysis = await this.analyzeChildSessions(child.id)
for (const nested of nestedAnalysis.subagents) {
result.subagents.push(nested)
}
result.totalInputTokens += nestedAnalysis.totalInputTokens
result.totalOutputTokens += nestedAnalysis.totalOutputTokens
result.totalReasoningTokens += nestedAnalysis.totalReasoningTokens
result.totalCacheReadTokens += nestedAnalysis.totalCacheReadTokens
result.totalCacheWriteTokens += nestedAnalysis.totalCacheWriteTokens
result.totalTokens += nestedAnalysis.totalTokens
result.totalApiCost += nestedAnalysis.totalApiCost
result.totalEstimatedCost += nestedAnalysis.totalEstimatedCost
result.totalApiCalls += nestedAnalysis.totalApiCalls
}
} catch (error) {
console.error(`Failed to fetch child sessions for ${parentSessionID}:`, error)
}
return result
}
private async analyzeChildSession(child: ChildSession): Promise<SubagentSummary | null> {
try {
const messagesResponse = await this.client.session.messages({ path: { id: child.id } })
const messages: SessionMessage[] = ((messagesResponse as any)?.data ?? messagesResponse ?? []) as SessionMessage[]
if (!Array.isArray(messages) || messages.length === 0) return null
const agentType = this.extractAgentType(child.title)
let inputTokens = 0,
outputTokens = 0,
reasoningTokens = 0
let cacheReadTokens = 0,
cacheWriteTokens = 0
let apiCost = 0,
assistantMessageCount = 0,
modelName = "unknown"
for (const message of messages) {
if (message.info.role === "assistant") {
assistantMessageCount++
const tokens = message.info.tokens
if (tokens) {
inputTokens += Number(tokens.input) || 0
outputTokens += Number(tokens.output) || 0
reasoningTokens += Number(tokens.reasoning) || 0
cacheReadTokens += Number(tokens.cache?.read) || 0
cacheWriteTokens += Number(tokens.cache?.write) || 0
}
apiCost += Number(message.info.cost) || 0
if (message.info.modelID) modelName = message.info.modelID
}
}
const totalTokens = inputTokens + outputTokens + reasoningTokens + cacheReadTokens + cacheWriteTokens
const pricing = this.getPricing(modelName)
const estimatedCost =
(inputTokens / 1_000_000) * pricing.input +
((outputTokens + reasoningTokens) / 1_000_000) * pricing.output +
(cacheReadTokens / 1_000_000) * pricing.cacheRead +
(cacheWriteTokens / 1_000_000) * pricing.cacheWrite
return {
sessionID: child.id,
title: child.title,
agentType,
inputTokens,
outputTokens,
reasoningTokens,
cacheReadTokens,
cacheWriteTokens,
totalTokens,
apiCost,
estimatedCost,
assistantMessageCount,
}
} catch (error) {
console.error(`Failed to analyze child session ${child.id}:`, error)
return null
}
}
private extractAgentType(title: string): string {
const match = title.match(/@(\w+)\s+subagent/i)
if (match) return match[1]
const words = title.split(/\s+/)
return words[0]?.toLowerCase() || "subagent"
}
private getPricing(modelName: string): ModelPricing {
const normalizedName = modelName.includes("/") ? modelName.split("/").pop() || modelName : modelName
if (this.pricingData[normalizedName]) return this.pricingData[normalizedName]
const lowerModel = normalizedName.toLowerCase()
for (const [key, pricing] of Object.entries(this.pricingData)) {
if (lowerModel.startsWith(key.toLowerCase())) return pricing
}
return this.pricingData["default"] || { input: 1, output: 3, cacheWrite: 0, cacheRead: 0 }
}
}
[refactor-the-code a386101] refactor: extract SubagentAnalyzer to lib/subagent.ts
1 file changed, 148 insertions(+)
create mode 100644 plugin/lib/subagent.ts
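SubagentAnalyzer labels each child session by parsing its title: it prefers an `@name subagent` marker and otherwise falls back to the lowercased first word. A standalone copy of that heuristic (extracted here only to show the behavior; in the module it is a private method):

```typescript
// Title-parsing heuristic used to label child sessions: prefer an
// "@name subagent" marker, else fall back to the first word of the title.
function extractAgentType(title: string): string {
  const match = title.match(/@(\w+)\s+subagent/i)
  if (match) return match[1]
  const words = title.split(/\s+/)
  return words[0]?.toLowerCase() || "subagent"
}

// extractAgentType("@researcher subagent: scan docs") yields "researcher"
// extractAgentType("Explore the repo layout") yields "explore"
```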
Now creating lib/formatter.ts:
// OutputFormatter - generates visual reports from token analysis
import type { TokenAnalysis, CategoryEntry, CostEstimate, SubagentAnalysis } from "./types"
import { CostCalculator } from "./cost"
export class OutputFormatter {
private readonly BAR_WIDTH = 30
private readonly TOKEN_SPACING = 11
private readonly CATEGORY_LABEL_WIDTH = 9
private readonly TOOL_LABEL_WIDTH = 20
private readonly TOP_CONTRIBUTOR_LABEL_WIDTH = 30
constructor(private costCalculator: CostCalculator) {}
private formatCategoryBar(
label: string,
tokens: number,
total: number,
labelWidth: number = this.CATEGORY_LABEL_WIDTH
): string {
if (tokens === 0) return ""
const percentage = total > 0 ? ((tokens / total) * 100).toFixed(1) : "0.0"
const percentageNum = parseFloat(percentage)
const barWidth = Math.round((percentageNum / 100) * this.BAR_WIDTH)
const bar = "\u2588".repeat(barWidth) + "\u2591".repeat(Math.max(0, this.BAR_WIDTH - barWidth))
const labelPadded = label.padEnd(labelWidth)
const formattedTokens = this.formatNumber(tokens)
let pct = percentage
if (percentageNum < 10) {
pct = " " + pct
}
const tokensPart = `(${formattedTokens})`
const spacesNeeded = Math.max(1, this.TOKEN_SPACING - tokensPart.length)
const spacing = " ".repeat(spacesNeeded)
return `${labelPadded} ${bar} ${spacing}${pct}% ${tokensPart}`
}
format(analysis: TokenAnalysis): string {
const inputCategories = [
{ label: "SYSTEM", tokens: analysis.categories.system.totalTokens },
{ label: "USER", tokens: analysis.categories.user.totalTokens },
{ label: "TOOLS", tokens: analysis.categories.tools.totalTokens },
]
const outputCategories = [
{ label: "ASSISTANT", tokens: analysis.categories.assistant.totalTokens },
{ label: "REASONING", tokens: analysis.categories.reasoning.totalTokens },
]
const topEntries = this.collectTopEntries(analysis, 5)
const toolStats = new Map<string, { tokens: number; calls: number }>()
for (const [toolName, calls] of analysis.toolCallCounts.entries()) {
toolStats.set(toolName, { tokens: 0, calls })
}
for (const entry of analysis.categories.tools.allEntries) {
const existing = toolStats.get(entry.label) || { tokens: 0, calls: 0 }
toolStats.set(entry.label, { ...existing, tokens: entry.tokens })
}
const toolEntries = Array.from(toolStats.entries())
.map(([label, stats]) => ({ label, tokens: stats.tokens, calls: stats.calls }))
.sort((a, b) => b.tokens - a.tokens)
const costEstimate = this.costCalculator.calculateCost(analysis)
return this.formatVisualOutput(
analysis.sessionID,
analysis.model.name,
analysis.totalTokens,
analysis.inputTokens,
analysis.outputTokens,
analysis.reasoningTokens,
analysis.cacheReadTokens,
analysis.cacheWriteTokens,
analysis.assistantMessageCount,
analysis.mostRecentInput,
analysis.mostRecentOutput,
analysis.mostRecentReasoning,
analysis.mostRecentCacheRead,
analysis.mostRecentCacheWrite,
inputCategories,
outputCategories,
topEntries,
toolEntries,
costEstimate,
analysis.subagentAnalysis
)
}
private formatVisualOutput(
sessionID: string,
modelName: string,
totalTokens: number,
inputTokens: number,
outputTokens: number,
reasoningTokens: number,
cacheReadTokens: number,
cacheWriteTokens: number,
assistantMessageCount: number,
mostRecentInput: number,
mostRecentOutput: number,
mostRecentReasoning: number,
mostRecentCacheRead: number,
mostRecentCacheWrite: number,
inputCategories: Array<{ label: string; tokens: number }>,
outputCategories: Array<{ label: string; tokens: number }>,
topEntries: CategoryEntry[],
toolEntries: Array<{ label: string; tokens: number; calls: number }>,
cost: CostEstimate,
subagentAnalysis?: SubagentAnalysis
): string {
const lines: string[] = []
const sessionTotal = inputTokens + cacheReadTokens + cacheWriteTokens + outputTokens + reasoningTokens
const mainCost = cost.isSubscription ? cost.estimatedSessionCost : cost.apiSessionCost
// Header
lines.push(`\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550`)
lines.push(`Token Analysis: Session ${sessionID}`)
lines.push(`Model: ${modelName}`)
lines.push(`\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550`)
lines.push(``)
// 1. TOKEN BREAKDOWN BY CATEGORY
lines.push(`TOKEN BREAKDOWN BY CATEGORY`)
lines.push(`\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500`)
lines.push(`Estimated using tokenizer analysis of message content:`)
lines.push(``)
const inputTotal = inputCategories.reduce((sum, cat) => sum + cat.tokens, 0)
lines.push(`Input Categories:`)
for (const category of inputCategories) {
const barLine = this.formatCategoryBar(category.label, category.tokens, inputTotal)
if (barLine) lines.push(` ${barLine}`)
}
lines.push(``)
lines.push(` Subtotal: ${this.formatNumber(inputTotal)} estimated input tokens`)
lines.push(``)
const outputTotal = outputCategories.reduce((sum, cat) => sum + cat.tokens, 0)
lines.push(`Output Categories:`)
for (const category of outputCategories) {
const barLine = this.formatCategoryBar(category.label, category.tokens, outputTotal)
if (barLine) lines.push(` ${barLine}`)
}
lines.push(``)
lines.push(` Subtotal: ${this.formatNumber(outputTotal)} estimated output tokens`)
lines.push(``)
lines.push(`Local Total: ${this.formatNumber(totalTokens)} tokens (estimated)`)
// 2. TOOL USAGE BREAKDOWN (right after token breakdown)
if (toolEntries.length > 0) {
const toolsTotalTokens = inputCategories.find((c) => c.label === "TOOLS")?.tokens || 0
lines.push(``)
lines.push(`TOOL USAGE BREAKDOWN`)
lines.push(`\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500`)
for (const tool of toolEntries) {
const barLine = this.formatCategoryBar(tool.label, tool.tokens, toolsTotalTokens, this.TOOL_LABEL_WIDTH)
if (barLine) {
const calls = `${tool.calls}x`.padStart(5)
lines.push(`${barLine} ${calls}`)
}
}
}
// 3. TOP CONTRIBUTORS
if (topEntries.length > 0) {
lines.push(``)
lines.push(`TOP CONTRIBUTORS`)
lines.push(`\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500`)
for (const entry of topEntries) {
const percentage = ((entry.tokens / totalTokens) * 100).toFixed(1)
const label = `\u2022 ${entry.label}`.padEnd(this.TOP_CONTRIBUTOR_LABEL_WIDTH)
const formattedTokens = this.formatNumber(entry.tokens)
lines.push(`${label} ${formattedTokens} tokens (${percentage}%)`)
}
}
// 4. MOST RECENT API CALL
lines.push(``)
lines.push(`\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550`)
lines.push(`MOST RECENT API CALL`)
lines.push(`\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500`)
lines.push(``)
lines.push(`Raw telemetry from last API response:`)
lines.push(` Input (fresh): ${this.formatNumber(mostRecentInput).padStart(10)} tokens`)
lines.push(` Cache read: ${this.formatNumber(mostRecentCacheRead).padStart(10)} tokens`)
if (mostRecentCacheWrite > 0) {
lines.push(` Cache write: ${this.formatNumber(mostRecentCacheWrite).padStart(10)} tokens`)
}
lines.push(` Output: ${this.formatNumber(mostRecentOutput).padStart(10)} tokens`)
if (mostRecentReasoning > 0) {
lines.push(` Reasoning: ${this.formatNumber(mostRecentReasoning).padStart(10)} tokens`)
}
lines.push(` \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500`)
lines.push(
` Total: ${this.formatNumber(mostRecentInput + mostRecentCacheRead + mostRecentCacheWrite + mostRecentOutput + mostRecentReasoning).padStart(10)} tokens`
)
// 5. SESSION TOTALS
lines.push(``)
lines.push(`\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550`)
lines.push(`SESSION TOTALS (All ${assistantMessageCount} API calls)`)
lines.push(`\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500`)
lines.push(``)
lines.push(`Total tokens processed across the entire session (for cost calculation):`)
lines.push(``)
lines.push(` Input tokens: ${this.formatNumber(inputTokens).padStart(10)} (fresh tokens across all calls)`)
lines.push(` Cache read: ${this.formatNumber(cacheReadTokens).padStart(10)} (cached tokens across all calls)`)
lines.push(` Cache write: ${this.formatNumber(cacheWriteTokens).padStart(10)} (tokens written to cache)`)
lines.push(` Output tokens: ${this.formatNumber(outputTokens).padStart(10)} (all model responses)`)
if (reasoningTokens > 0) {
lines.push(` Reasoning tokens: ${this.formatNumber(reasoningTokens).padStart(10)} (thinking/reasoning)`)
}
lines.push(` \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500`)
lines.push(` Session Total: ${this.formatNumber(sessionTotal).padStart(10)} tokens (for billing)`)
// 6. SESSION COST / ESTIMATED SESSION COST
lines.push(``)
lines.push(`\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550`)
if (cost.isSubscription) {
lines.push(`ESTIMATED SESSION COST (API Key Pricing)`)
lines.push(`\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500`)
lines.push(``)
lines.push(`You appear to be on a subscription plan (API cost is $0).`)
lines.push(`Here's what this session would cost with direct API access:`)
lines.push(``)
lines.push(
` Input tokens: ${this.formatNumber(inputTokens).padStart(10)} \u00d7 $${cost.pricePerMillionInput.toFixed(2)}/M = $${cost.estimatedInputCost.toFixed(4)}`
)
lines.push(
` Output tokens: ${this.formatNumber(outputTokens + reasoningTokens).padStart(10)} \u00d7 $${cost.pricePerMillionOutput.toFixed(2)}/M = $${cost.estimatedOutputCost.toFixed(4)}`
)
if (cacheReadTokens > 0 && cost.pricePerMillionCacheRead > 0) {
lines.push(
` Cache read: ${this.formatNumber(cacheReadTokens).padStart(10)} \u00d7 $${cost.pricePerMillionCacheRead.toFixed(2)}/M = $${cost.estimatedCacheReadCost.toFixed(4)}`
)
}
if (cacheWriteTokens > 0 && cost.pricePerMillionCacheWrite > 0) {
lines.push(
` Cache write: ${this.formatNumber(cacheWriteTokens).padStart(10)} \u00d7 $${cost.pricePerMillionCacheWrite.toFixed(2)}/M = $${cost.estimatedCacheWriteCost.toFixed(4)}`
)
}
lines.push(`\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500`)
lines.push(`ESTIMATED TOTAL: $${cost.estimatedSessionCost.toFixed(4)}`)
lines.push(``)
lines.push(`Note: This estimate uses standard API pricing from models.json.`)
lines.push(`Actual API costs may vary based on provider and context size.`)
} else {
lines.push(`SESSION COST`)
lines.push(`\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500`)
lines.push(``)
lines.push(`Token usage breakdown:`)
lines.push(` Input tokens: ${this.formatNumber(inputTokens).padStart(10)}`)
lines.push(` Output tokens: ${this.formatNumber(outputTokens).padStart(10)}`)
if (reasoningTokens > 0) {
lines.push(` Reasoning tokens: ${this.formatNumber(reasoningTokens).padStart(10)}`)
}
if (cacheReadTokens > 0) {
lines.push(` Cache read: ${this.formatNumber(cacheReadTokens).padStart(10)}`)
}
if (cacheWriteTokens > 0) {
lines.push(` Cache write: ${this.formatNumber(cacheWriteTokens).padStart(10)}`)
}
lines.push(``)
lines.push(`\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500`)
lines.push(`ACTUAL COST (from API): $${cost.apiSessionCost.toFixed(4)}`)
const diff = Math.abs(cost.apiSessionCost - cost.estimatedSessionCost)
const diffPercent = cost.apiSessionCost > 0 ? (diff / cost.apiSessionCost) * 100 : 0
if (diffPercent > 5) {
// Inside this branch diffPercent is always > 0, so only the sign needs choosing.
lines.push(
`Estimated cost: $${cost.estimatedSessionCost.toFixed(4)} (${cost.estimatedSessionCost > cost.apiSessionCost ? "+" : "-"}${diffPercent.toFixed(1)}% diff)`
)
}
lines.push(``)
lines.push(`Note: Actual cost from OpenCode includes provider-specific pricing`)
lines.push(`and 200K+ context adjustments.`)
}
// 7. SUBAGENT COSTS (if any)
if (subagentAnalysis && subagentAnalysis.subagents.length > 0) {
const subagentLabelWidth = 25
const subagentTotalCost = cost.isSubscription
? subagentAnalysis.totalEstimatedCost
: subagentAnalysis.totalApiCost
lines.push(``)
lines.push(`\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550`)
lines.push(
`SUBAGENT COSTS (${subagentAnalysis.subagents.length} child sessions, ${subagentAnalysis.totalApiCalls} API calls)`
)
lines.push(`\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500`)
lines.push(``)
for (const subagent of subagentAnalysis.subagents) {
const label = `${subagent.agentType}`.padEnd(subagentLabelWidth)
const costStr = cost.isSubscription
? `$${subagent.estimatedCost.toFixed(4)}`
: `$${subagent.apiCost.toFixed(4)}`
const tokensStr = `(${this.formatNumber(subagent.totalTokens)} tokens, ${subagent.assistantMessageCount} calls)`
lines.push(` ${label} ${costStr.padStart(10)} ${tokensStr}`)
}
lines.push(`\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500`)
lines.push(
`Subagent Total:${" ".repeat(subagentLabelWidth - 14)} $${subagentTotalCost.toFixed(4)} (${this.formatNumber(subagentAnalysis.totalTokens)} tokens, ${subagentAnalysis.totalApiCalls} calls)`
)
}
// 8. SUMMARY (always last)
lines.push(``)
lines.push(`\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550`)
lines.push(`SUMMARY`)
lines.push(`\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500`)
lines.push(``)
lines.push(` Cost Tokens API Calls`)
if (subagentAnalysis && subagentAnalysis.subagents.length > 0) {
const subagentTotalCost = cost.isSubscription
? subagentAnalysis.totalEstimatedCost
: subagentAnalysis.totalApiCost
const grandTotalCost = mainCost + subagentTotalCost
const grandTotalTokens = sessionTotal + subagentAnalysis.totalTokens
const grandTotalApiCalls = assistantMessageCount + subagentAnalysis.totalApiCalls
lines.push(
` Main session: $${mainCost.toFixed(4).padStart(10)} ${this.formatNumber(sessionTotal).padStart(10)} ${assistantMessageCount.toString().padStart(5)}`
)
lines.push(
` Subagents: $${subagentTotalCost.toFixed(4).padStart(10)} ${this.formatNumber(subagentAnalysis.totalTokens).padStart(10)} ${subagentAnalysis.totalApiCalls.toString().padStart(5)}`
)
lines.push(`\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500`)
lines.push(
` TOTAL: $${grandTotalCost.toFixed(4).padStart(10)} ${this.formatNumber(grandTotalTokens).padStart(10)} ${grandTotalApiCalls.toString().padStart(5)}`
)
} else {
lines.push(
` Session: $${mainCost.toFixed(4).padStart(10)} ${this.formatNumber(sessionTotal).padStart(10)} ${assistantMessageCount.toString().padStart(5)}`
)
}
lines.push(``)
lines.push(`\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550`)
return lines.join("\n")
}
private collectTopEntries(analysis: TokenAnalysis, limit: number): CategoryEntry[] {
const pool = [
...analysis.categories.system.allEntries,
...analysis.categories.user.allEntries,
...analysis.categories.assistant.allEntries,
...analysis.categories.tools.allEntries,
...analysis.categories.reasoning.allEntries,
]
.filter((entry) => entry.tokens > 0)
.sort((a, b) => b.tokens - a.tokens)
return pool.slice(0, limit)
}
private formatNumber(value: number): string {
return new Intl.NumberFormat("en-US").format(value)
}
}
[refactor-the-code 7c6e17c] refactor: extract OutputFormatter to lib/formatter.ts
1 file changed, 364 insertions(+)
create mode 100644 plugin/lib/formatter.ts
Now refactoring tokenscope.ts to keep only the Plugin export with imports:
// OpenCode Token Analyzer Plugin - Main Entry Point
import type { Plugin } from "@opencode-ai/plugin"
import { tool } from "@opencode-ai/plugin"
import path from "path"
import fs from "fs/promises"
import type { SessionMessage } from "./lib/types"
import { DEFAULT_ENTRY_LIMIT, loadModelPricing } from "./lib/config"
import { TokenizerManager } from "./lib/tokenizer"
import { ModelResolver, ContentCollector, TokenAnalysisEngine } from "./lib/analyzer"
import { CostCalculator } from "./lib/cost"
import { SubagentAnalyzer } from "./lib/subagent"
import { OutputFormatter } from "./lib/formatter"
export const TokenAnalyzerPlugin: Plugin = async ({ client }) => {
const pricingData = await loadModelPricing()
const tokenizerManager = new TokenizerManager()
const modelResolver = new ModelResolver()
const contentCollector = new ContentCollector()
const analysisEngine = new TokenAnalysisEngine(tokenizerManager, contentCollector)
const costCalculator = new CostCalculator(pricingData)
const subagentAnalyzer = new SubagentAnalyzer(client, costCalculator, pricingData)
const formatter = new OutputFormatter(costCalculator)
return {
tool: {
tokenscope: tool({
description:
"Analyze token usage across the current session with detailed breakdowns by category (system, user, assistant, tools, reasoning). " +
"Provides visual charts, identifies top token consumers, and includes costs from subagent (Task tool) child sessions.",
args: {
sessionID: tool.schema.string().optional(),
limitMessages: tool.schema.number().int().min(1).max(10).optional(),
includeSubagents: tool.schema
.boolean()
.optional()
.describe("Include token costs from subagent child sessions (default: true)"),
},
async execute(args, context) {
const sessionID = args.sessionID ?? context.sessionID
if (!sessionID) {
throw new Error("No session ID available for token analysis")
}
const response = await client.session.messages({ path: { id: sessionID } })
const messages: SessionMessage[] = ((response as any)?.data ?? response ?? []) as SessionMessage[]
if (!Array.isArray(messages) || messages.length === 0) {
return `Session ${sessionID} has no messages yet.`
}
const tokenModel = modelResolver.resolveTokenModel(messages)
const analysis = await analysisEngine.analyze(
sessionID,
messages,
tokenModel,
args.limitMessages ?? DEFAULT_ENTRY_LIMIT
)
if (args.includeSubagents !== false) {
analysis.subagentAnalysis = await subagentAnalyzer.analyzeChildSessions(sessionID)
}
const output = formatter.format(analysis)
const outputPath = path.join(process.cwd(), "token-usage-output.txt")
try {
try {
await fs.unlink(outputPath)
} catch {} // ignore: file may not exist yet
await fs.writeFile(outputPath, output, { encoding: "utf8", flag: "w" })
} catch (error) {
throw new Error(`Failed to write token analysis to ${outputPath}: ${error}`)
}
const timestamp = new Date().toISOString()
const formattedTotal = new Intl.NumberFormat("en-US").format(analysis.totalTokens)
let summaryMsg = `Token analysis complete! Full report saved to: ${outputPath}\n\nTimestamp: ${timestamp}\nMain session tokens: ${formattedTotal}`
if (analysis.subagentAnalysis && analysis.subagentAnalysis.subagents.length > 0) {
const subagentTokens = new Intl.NumberFormat("en-US").format(analysis.subagentAnalysis.totalTokens)
const grandTotal = new Intl.NumberFormat("en-US").format(
analysis.totalTokens + analysis.subagentAnalysis.totalTokens
)
summaryMsg += `\nSubagent sessions: ${analysis.subagentAnalysis.subagents.length} (${subagentTokens} tokens)`
summaryMsg += `\nGrand total: ${grandTotal} tokens`
}
summaryMsg += `\n\nUse: cat token-usage-output.txt (or read the file) to view the complete analysis.`
return summaryMsg
},
}),
},
}
}
[refactor-the-code 6c221ca] refactor: simplify tokenscope.ts to use lib modules
1 file changed, 27 insertions(+), 1267 deletions(-)
Now updating install.sh:
<file>
00001| #!/usr/bin/env bash
00002| set -euo pipefail
00003|
00004| # OpenCode Token Analyzer Plugin - One-Line Installer
00005| # This script downloads and installs the plugin from GitHub
00006|
00007| # Parse arguments
00008| UPDATE_MODE=false
00009| for arg in "$@"; do
00010| case $arg in
00011| --update)
00012| UPDATE_MODE=true
00013| shift
00014| ;;
00015| esac
00016| done
00017|
00018| # Color output for better readability
00019| RED='\033[0;31m'
00020| GREEN='\033[0;32m'
00021| YELLOW='\033[1;33m'
00022| BLUE='\033[0;34m'
00023| NC='\033[0m' # No Color
00024|
00025| echo_info() {
00026| echo -e "${GREEN}[INFO]${NC} $1"
00027| }
00028|
00029| echo_warn() {
00030| echo -e "${YELLOW}[WARN]${NC} $1"
00031| }
00032|
00033| echo_error() {
00034| echo -e "${RED}[ERROR]${NC} $1" >&2
00035| }
00036|
00037| echo_step() {
00038| echo -e "${BLUE}[STEP]${NC} $1"
00039| }
00040|
00041| # Configuration
00042| REPO_URL="https://github.com/ramtinJ95/opencode-tokenscope"
00043| OPENCODE_DIR="${HOME}/.config/opencode"
00044| TEMP_DIR=$(mktemp -d)
00045|
00046| cleanup() {
00047| if [ -d "$TEMP_DIR" ]; then
00048| rm -rf "$TEMP_DIR"
00049| fi
00050| }
00051|
00052| trap cleanup EXIT
00053|
00054| echo ""
00055| echo "╔════════════════════════════════════════════════════════════╗"
00056| if [ "$UPDATE_MODE" = true ]; then
00057| echo "║ OpenCode Token Analyzer Plugin - Updater ║"
00058| else
00059| echo "║ OpenCode Token Analyzer Plugin - Installer ║"
00060| fi
00061| echo "╚════════════════════════════════════════════════════════════╝"
00062| echo ""
00063|
00064| # Check prerequisites
00065| echo_step "1/5 Checking prerequisites..."
00066|
00067| if [ ! -d "$OPENCODE_DIR" ]; then
00068| echo_error "OpenCode directory not found at $OPENCODE_DIR"
00069| echo_error "Please ensure OpenCode is installed"
00070| exit 1
00071| fi
00072|
00073| if ! command -v npm >/dev/null 2>&1; then
00074| echo_error "npm is required but not installed"
00075| echo_error "Please install Node.js and npm first"
00076| exit 1
00077| fi
00078|
00079| if ! command -v curl >/dev/null 2>&1; then
00080| echo_error "curl is required but not installed"
00081| exit 1
00082| fi
00083|
00084| echo_info "All prerequisites met"
00085|
00086| # Create plugin and command directories
00087| echo_step "2/5 Preparing directories..."
00088| mkdir -p "$OPENCODE_DIR/plugin"
00089| mkdir -p "$OPENCODE_DIR/command"
00090| echo_info "Directories ready"
00091|
00092| # Download files
00093| echo_step "3/5 Downloading plugin files..."
00094|
00095| FILES=(
00096| "plugin/tokenscope.ts"
00097| "plugin/models.json"
00098| "plugin/package.json"
00099| "plugin/install.sh"
00100| "command/tokenscope.md"
00101| )
00102|
00103| for file in "${FILES[@]}"; do
00104| filename=$(basename "$file")
00105| dir=$(dirname "$file")
00106|
00107| echo_info "Downloading $filename..."
00108|
00109| if curl -fsSL "$REPO_URL/raw/main/$file" -o "$TEMP_DIR/$filename" 2>/dev/null; then
00110| # Move to appropriate directory
00111| if [ "$dir" = "plugin" ]; then
00112| mv "$TEMP_DIR/$filename" "$OPENCODE_DIR/plugin/$filename"
00113| else
00114| mv "$TEMP_DIR/$filename" "$OPENCODE_DIR/command/$filename"
00115| fi
00116| else
00117| echo_error "Failed to download $file"
00118| echo_error "Please check your internet connection and try again"
00119| exit 1
00120| fi
00121| done
00122|
00123| echo_info "All files downloaded successfully"
00124|
00125| # Install dependencies
00126| echo_step "4/5 Installing dependencies..."
00127|
00128| DEPS_EXIST=false
00129| if [ -d "$OPENCODE_DIR/plugin/node_modules/js-tiktoken" ] && [ -d "$OPENCODE_DIR/plugin/node_modules/@huggingface/transformers" ]; then
00130| DEPS_EXIST=true
00131| fi
00132|
00133| if [ "$UPDATE_MODE" = true ] && [ "$DEPS_EXIST" = true ]; then
00134| echo_info "Update mode: Dependencies already installed, skipping..."
00135| else
00136| echo_info "This may take 1-2 minutes..."
00137| cd "$OPENCODE_DIR/plugin"
00138| if npm install --prefix "$OPENCODE_DIR/plugin" js-tiktoken@1.0.15 @huggingface/transformers@3.1.2 --save; then
00139| echo_info "Dependencies installed successfully"
00140| else
00141| echo_error "Failed to install dependencies"
00142| echo_error "You can try running manually: cd ~/.config/opencode/plugin && npm install"
00143| exit 1
00144| fi
00145| fi
00146|
00147| # Verify installation
00148| echo_step "5/5 Verifying installation..."
00149|
00150| REQUIRED_FILES=(
00151| "$OPENCODE_DIR/plugin/tokenscope.ts"
00152| "$OPENCODE_DIR/plugin/models.json"
00153| "$OPENCODE_DIR/plugin/node_modules/js-tiktoken"
00154| "$OPENCODE_DIR/plugin/node_modules/@huggingface/transformers"
00155| "$OPENCODE_DIR/command/tokenscope.md"
00156| )
00157|
00158| all_present=true
00159| for file in "${REQUIRED_FILES[@]}"; do
00160| if [ ! -e "$file" ]; then
00161| echo_warn "Missing: $file"
00162| all_present=false
00163| fi
00164| done
00165|
00166| if [ "$all_present" = true ]; then
00167| echo_info "All files verified"
00168| else
00169| echo_error "Some files are missing"
00170| exit 1
00171| fi
00172|
00173| # Get installed version
00174| INSTALLED_VERSION=$(grep -o '"version": *"[^"]*"' "$OPENCODE_DIR/plugin/package.json" | cut -d'"' -f4)
00175|
00176| echo ""
00177| echo "╔════════════════════════════════════════════════════════════╗"
00178| if [ "$UPDATE_MODE" = true ]; then
00179| echo "║ Update Complete! ║"
00180| else
00181| echo "║ Installation Complete! ║"
00182| fi
00183| echo "╚════════════════════════════════════════════════════════════╝"
00184| echo ""
00185| echo_info "Version: $INSTALLED_VERSION"
00186| echo_info "Plugin installed at: $OPENCODE_DIR/plugin/tokenscope.ts"
00187| echo_info "Command installed at: $OPENCODE_DIR/command/tokenscope.md"
00188| echo ""
00189| echo_step "Next steps:"
00190| echo " 1. Restart OpenCode"
00191| echo " 2. Type /tokenscope in any session"
00192| echo " 3. View full report: cat token-usage-output.txt"
00193| echo ""
00194| echo_info "For help and documentation, visit:"
00195| echo_info "$REPO_URL"
00196| echo ""
00197|
00198|
(End of file - total 198 lines)
</file>
#!/usr/bin/env bash
set -euo pipefail
# OpenCode Token Analyzer Plugin - One-Line Installer
# This script downloads and installs the plugin from GitHub
# Parse arguments
UPDATE_MODE=false
for arg in "$@"; do
case $arg in
--update)
UPDATE_MODE=true
shift
;;
esac
done
# Color output for better readability
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
echo_info() {
echo -e "${GREEN}[INFO]${NC} $1"
}
echo_warn() {
echo -e "${YELLOW}[WARN]${NC} $1"
}
echo_error() {
echo -e "${RED}[ERROR]${NC} $1" >&2
}
echo_step() {
echo -e "${BLUE}[STEP]${NC} $1"
}
# Configuration
REPO_URL="https://github.com/ramtinJ95/opencode-tokenscope"
OPENCODE_DIR="${HOME}/.config/opencode"
TEMP_DIR=$(mktemp -d)
cleanup() {
if [ -d "$TEMP_DIR" ]; then
rm -rf "$TEMP_DIR"
fi
}
trap cleanup EXIT
echo ""
echo "╔════════════════════════════════════════════════════════════╗"
if [ "$UPDATE_MODE" = true ]; then
echo "║ OpenCode Token Analyzer Plugin - Updater ║"
else
echo "║ OpenCode Token Analyzer Plugin - Installer ║"
fi
echo "╚════════════════════════════════════════════════════════════╝"
echo ""
# Check prerequisites
echo_step "1/5 Checking prerequisites..."
if [ ! -d "$OPENCODE_DIR" ]; then
echo_error "OpenCode directory not found at $OPENCODE_DIR"
echo_error "Please ensure OpenCode is installed"
exit 1
fi
if ! command -v npm >/dev/null 2>&1; then
echo_error "npm is required but not installed"
echo_error "Please install Node.js and npm first"
exit 1
fi
if ! command -v curl >/dev/null 2>&1; then
echo_error "curl is required but not installed"
exit 1
fi
echo_info "All prerequisites met"
# Create plugin and command directories
echo_step "2/5 Preparing directories..."
mkdir -p "$OPENCODE_DIR/plugin"
mkdir -p "$OPENCODE_DIR/plugin/lib"
mkdir -p "$OPENCODE_DIR/command"
echo_info "Directories ready"
# Download files
echo_step "3/5 Downloading plugin files..."
FILES=(
"plugin/tokenscope.ts"
"plugin/lib/types.ts"
"plugin/lib/config.ts"
"plugin/lib/tokenizer.ts"
"plugin/lib/analyzer.ts"
"plugin/lib/cost.ts"
"plugin/lib/subagent.ts"
"plugin/lib/formatter.ts"
"plugin/models.json"
"plugin/package.json"
"plugin/install.sh"
"command/tokenscope.md"
)
for file in "${FILES[@]}"; do
filename=$(basename "$file")
dir=$(dirname "$file")
echo_info "Downloading $filename..."
if curl -fsSL "$REPO_URL/raw/main/$file" -o "$TEMP_DIR/$filename" 2>/dev/null; then
# Move to appropriate directory based on path
if [ "$dir" = "plugin" ]; then
mv "$TEMP_DIR/$filename" "$OPENCODE_DIR/plugin/$filename"
elif [ "$dir" = "plugin/lib" ]; then
mv "$TEMP_DIR/$filename" "$OPENCODE_DIR/plugin/lib/$filename"
else
mv "$TEMP_DIR/$filename" "$OPENCODE_DIR/command/$filename"
fi
else
echo_error "Failed to download $file"
echo_error "Please check your internet connection and try again"
exit 1
fi
done
echo_info "All files downloaded successfully"
# Install dependencies
echo_step "4/5 Installing dependencies..."
DEPS_EXIST=false
if [ -d "$OPENCODE_DIR/plugin/node_modules/js-tiktoken" ] && [ -d "$OPENCODE_DIR/plugin/node_modules/@huggingface/transformers" ]; then
DEPS_EXIST=true
fi
if [ "$UPDATE_MODE" = true ] && [ "$DEPS_EXIST" = true ]; then
echo_info "Update mode: Dependencies already installed, skipping..."
else
echo_info "This may take 1-2 minutes..."
cd "$OPENCODE_DIR/plugin"
if npm install --prefix "$OPENCODE_DIR/plugin" js-tiktoken@1.0.15 @huggingface/transformers@3.1.2 --save; then
echo_info "Dependencies installed successfully"
else
echo_error "Failed to install dependencies"
echo_error "You can try running manually: cd ~/.config/opencode/plugin && npm install"
exit 1
fi
fi
# Verify installation
echo_step "5/5 Verifying installation..."
REQUIRED_FILES=(
"$OPENCODE_DIR/plugin/tokenscope.ts"
"$OPENCODE_DIR/plugin/lib/types.ts"
"$OPENCODE_DIR/plugin/lib/config.ts"
"$OPENCODE_DIR/plugin/lib/tokenizer.ts"
"$OPENCODE_DIR/plugin/lib/analyzer.ts"
"$OPENCODE_DIR/plugin/lib/cost.ts"
"$OPENCODE_DIR/plugin/lib/subagent.ts"
"$OPENCODE_DIR/plugin/lib/formatter.ts"
"$OPENCODE_DIR/plugin/models.json"
"$OPENCODE_DIR/plugin/node_modules/js-tiktoken"
"$OPENCODE_DIR/plugin/node_modules/@huggingface/transformers"
"$OPENCODE_DIR/command/tokenscope.md"
)
all_present=true
for file in "${REQUIRED_FILES[@]}"; do
if [ ! -e "$file" ]; then
echo_warn "Missing: $file"
all_present=false
fi
done
if [ "$all_present" = true ]; then
echo_info "All files verified"
else
echo_error "Some files are missing"
exit 1
fi
# Get installed version
INSTALLED_VERSION=$(grep -o '"version": *"[^"]*"' "$OPENCODE_DIR/plugin/package.json" | cut -d'"' -f4)
echo ""
echo "╔════════════════════════════════════════════════════════════╗"
if [ "$UPDATE_MODE" = true ]; then
echo "║ Update Complete! ║"
else
echo "║ Installation Complete! ║"
fi
echo "╚════════════════════════════════════════════════════════════╝"
echo ""
echo_info "Version: $INSTALLED_VERSION"
echo_info "Plugin installed at: $OPENCODE_DIR/plugin/tokenscope.ts"
echo_info "Command installed at: $OPENCODE_DIR/command/tokenscope.md"
echo ""
echo_step "Next steps:"
echo " 1. Restart OpenCode"
echo " 2. Type /tokenscope in any session"
echo " 3. View full report: cat token-usage-output.txt"
echo ""
echo_info "For help and documentation, visit:"
echo_info "$REPO_URL"
echo ""
[refactor-the-code d1fe439] refactor: update install.sh for multi-file plugin structure
1 file changed, 18 insertions(+), 2 deletions(-)
Now updating package.json to bump the version:
"version": "1.2.1"
"version": "1.3.0"
[refactor-the-code 238eea2] chore: bump version to 1.3.0
1 file changed, 1 insertion(+), 1 deletion(-)
Now updating the README.md to reflect the new architecture:
<file>
00401|
00402| - **Cache Read**: Tokens retrieved from cache (discounted rate ~90% off)
00403| - **Cache Write**: Tokens written to cache (slight premium ~25% more)
00404| - **Note**: Cache write is a billing charge, not additional context tokens
00405|
00406| ## Architecture
00407|
00408| ### Core Components
00409|
00410| 1. **TokenizerManager**: Loads and caches tokenizers (tiktoken, transformers)
00411| 2. **ModelResolver**: Detects model and selects appropriate tokenizer
00412| 3. **ContentCollector**: Extracts content from session messages, including tool call counts
00413| 4. **TokenAnalysisEngine**: Counts tokens and applies API telemetry adjustments
00414| 5. **CostCalculator**: Calculates costs from pricing database with cache-aware pricing
00415| 6. **SubagentAnalyzer**: Recursively fetches and analyzes child sessions from Task tool calls
00416| 7. **OutputFormatter**: Generates visual reports with charts and summaries
00417|
00418| ## Privacy & Security
00419|
00420| - **All processing is local**: No session data sent to external services
00421| - **Tokenizers from official sources**:
00422| - OpenAI tokenizers: npm registry
00423| - Transformers: Hugging Face Hub
00424| - **Open source**: Audit the code yourself
00425|
00426| ## Performance
00427|
00428| - **Fast**: Tokenizers cached after first load
00429| - **Parallel**: Categories processed concurrently
00430| - **Efficient**: Only analyzes on demand
00431| - **First-run download**: Transformers models download on demand (5-50MB per model)
00432| - **Subsequent runs**: Instant (uses cache)
00433|
00434| ## Contributing
00435|
00436| Contributions welcome! Ideas for enhancement:
00437|
00438| - Historical trend analysis
00439| - Export to CSV/JSON/PDF
00440| - Optimization suggestions
00441| - Custom categorization rules
00442| - Real-time monitoring with alerts
00443| - Compare sessions
00444| - Token burn rate calculation
00445|
00446| ## Support
00447|
00448| - **Issues**: [GitHub Issues](https://github.com/ramtinJ95/opencode-tokenscope/issues)
00449| - **Discussions**: [GitHub Discussions](https://github.com/ramtinJ95/opencode-tokenscope/discussions)
00450|
(End of file - total 450 lines)
</file>
## Architecture
### Core Components
1. **TokenizerManager**: Loads and caches tokenizers (tiktoken, transformers)
2. **ModelResolver**: Detects model and selects appropriate tokenizer
3. **ContentCollector**: Extracts content from session messages, including tool call counts
4. **TokenAnalysisEngine**: Counts tokens and applies API telemetry adjustments
5. **CostCalculator**: Calculates costs from pricing database with cache-aware pricing
6. **SubagentAnalyzer**: Recursively fetches and analyzes child sessions from Task tool calls
7. **OutputFormatter**: Generates visual reports with charts and summaries
## Architecture
### File Structure
```
plugin/
├── tokenscope.ts # Main entry point - Plugin export
├── lib/
│ ├── types.ts # All interfaces and type definitions
│ ├── config.ts # Constants, model maps, pricing loader
│ ├── tokenizer.ts # TokenizerManager class
│ ├── analyzer.ts # ModelResolver, ContentCollector, TokenAnalysisEngine
│ ├── cost.ts # CostCalculator class
│ ├── subagent.ts # SubagentAnalyzer class
│ └── formatter.ts # OutputFormatter class
├── models.json # Pricing data for 41+ models
├── package.json # Plugin metadata
└── install.sh # Installation script
```
### Core Components
1. **TokenizerManager** (`lib/tokenizer.ts`): Loads and caches tokenizers (tiktoken, transformers)
2. **ModelResolver** (`lib/analyzer.ts`): Detects model and selects appropriate tokenizer
3. **ContentCollector** (`lib/analyzer.ts`): Extracts content from session messages, including tool call counts
4. **TokenAnalysisEngine** (`lib/analyzer.ts`): Counts tokens and applies API telemetry adjustments
5. **CostCalculator** (`lib/cost.ts`): Calculates costs from pricing database with cache-aware pricing
6. **SubagentAnalyzer** (`lib/subagent.ts`): Recursively fetches and analyzes child sessions from Task tool calls
7. **OutputFormatter** (`lib/formatter.ts`): Generates visual reports with charts and summaries
[refactor-the-code c85d3df] docs: update README with new multi-file architecture
1 file changed, 25 insertions(+), 7 deletions(-)
All tasks are complete. Let me show you a summary of what was done:
c85d3df docs: update README with new multi-file architecture
238eea2 chore: bump version to 1.3.0
d1fe439 refactor: update install.sh for multi-file plugin structure
6c221ca refactor: simplify tokenscope.ts to use lib modules
7c6e17c refactor: extract OutputFormatter to lib/formatter.ts
a386101 refactor: extract SubagentAnalyzer to lib/subagent.ts
22fa67d refactor: extract CostCalculator to lib/cost.ts
1dc5297 refactor: extract analysis classes to lib/analyzer.ts
b8a8753 refactor: extract TokenizerManager to lib/tokenizer.ts
b244d3a refactor: extract config and model maps to lib/config.ts
f95be04 refactor: extract types and interfaces to lib/types.ts
0c28f19 refactor: add plugin/lib directory for multi-file structure
f5a9a60 Remove duplicate install.sh, keep only plugin/install.sh
bc33e37 Bump version to 1.2.1
93bd925 Clarify update instructions for different versions
total 28
drwxr-xr-x 1 ramtinj ramtinj 110 Dec 13 13:35 .
drwxr-xr-x 1 ramtinj ramtinj 130 Dec 6 14:20 ..
-rwxr-xr-x 1 ramtinj ramtinj 6340 Dec 13 13:42 install.sh
drwxr-xr-x 1 ramtinj ramtinj 156 Dec 13 13:41 lib
-rw-r--r-- 1 ramtinj ramtinj 5361 Nov 29 10:10 models.json
-rw-r--r-- 1 ramtinj ramtinj 170 Dec 13 13:43 package.json
-rw-r--r-- 1 ramtinj ramtinj 4293 Dec 13 13:42 tokenscope.ts
drwxr-xr-x 1 ramtinj ramtinj 48 Nov 10 21:45 vendor
---
total 68
drwxr-xr-x 1 ramtinj ramtinj 156 Dec 13 13:41 .
drwxr-xr-x 1 ramtinj ramtinj 110 Dec 13 13:35 ..
-rw-r--r-- 1 ramtinj ramtinj 13945 Dec 13 13:38 analyzer.ts
-rw-r--r-- 1 ramtinj ramtinj 3307 Dec 13 13:36 config.ts
-rw-r--r-- 1 ramtinj ramtinj 2401 Dec 13 13:39 cost.ts
-rw-r--r-- 1 ramtinj ramtinj 24964 Dec 13 13:41 formatter.ts
-rw-r--r-- 1 ramtinj ramtinj 0 Dec 13 13:35 .gitkeep
-rw-r--r-- 1 ramtinj ramtinj 5704 Dec 13 13:39 subagent.ts
-rw-r--r-- 1 ramtinj ramtinj 4078 Dec 13 13:37 tokenizer.ts
-rw-r--r-- 1 ramtinj ramtinj 3788 Dec 13 13:36 types.ts
Implementation complete. Here's a summary:
## Implementation Summary
### Commits Made (12 total)
1. refactor: add plugin/lib directory for multi-file structure
2. refactor: extract types and interfaces to lib/types.ts
3. refactor: extract config and model maps to lib/config.ts
4. refactor: extract TokenizerManager to lib/tokenizer.ts
5. refactor: extract analysis classes to lib/analyzer.ts
6. refactor: extract CostCalculator to lib/cost.ts
7. refactor: extract SubagentAnalyzer to lib/subagent.ts
8. refactor: extract OutputFormatter to lib/formatter.ts
9. refactor: simplify tokenscope.ts to use lib modules
10. refactor: update install.sh for multi-file plugin structure
11. chore: bump version to 1.3.0
12. docs: update README with new multi-file architecture
### New File Structure
plugin/
├── tokenscope.ts (~100 lines - main entry point)
├── lib/
│ ├── types.ts (~170 lines - interfaces & type guards)
│ ├── config.ts (~85 lines - constants & model maps)
│ ├── tokenizer.ts (~130 lines - TokenizerManager)
│ ├── analyzer.ts (~400 lines - ModelResolver, ContentCollector, TokenAnalysisEngine)
│ ├── cost.ts (~60 lines - CostCalculator)
│ ├── subagent.ts (~150 lines - SubagentAnalyzer)
│ └── formatter.ts (~365 lines - OutputFormatter)
├── models.json
├── package.json (version 1.3.0)
└── install.sh (updated for multi-file downloads)
### Changes Made
- Split 1340-line monolithic file into 8 focused modules
- Updated install.sh to download all lib files and create the lib directory
- Bumped version to 1.3.0
- Updated README with new architecture documentation
The plugin is ready for you to test!
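As a quick sanity check before testing, the new layout can be verified with a small helper. This is a hypothetical sketch, not part of the plugin; the file list mirrors the install script's REQUIRED_FILES (node_modules checks omitted):

```shell
# Hypothetical helper: verify the multi-file plugin layout under a given directory.
# Prints each missing file and returns non-zero if any are absent.
check_layout() {
  local base="$1" status=0
  for f in tokenscope.ts lib/types.ts lib/config.ts lib/tokenizer.ts \
           lib/analyzer.ts lib/cost.ts lib/subagent.ts lib/formatter.ts \
           models.json package.json; do
    if [ ! -e "$base/$f" ]; then
      echo "missing: $f"
      status=1
    fi
  done
  return $status
}
```

For example, `check_layout "$HOME/.config/opencode/plugin"` reports any module the installer failed to place.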
You are a code reviewer. Your job is to review code changes and provide actionable feedback.
---
Input: this branch
---
## Determining What to Review
Based on the input provided, determine which type of review to perform:
1. **No arguments (default)**: Review all uncommitted changes
- Run: `git diff` for unstaged changes
- Run: `git diff --cached` for staged changes
2. **Commit hash** (40-char SHA or short hash): Review that specific commit
- Run: `git show this branch`
3. **Branch name**: Compare current branch to the specified branch
- Run: `git diff this branch...HEAD`
4. **PR URL or number** (contains "github.com" or "pull" or looks like a PR number): Review the pull request
- Run: `gh pr view this branch` to get PR context
- Run: `gh pr diff this branch` to get the diff
Use best judgement when processing input.
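The dispatch rules above can be sketched as a small classifier. This is an illustrative assumption, not part of the reviewer spec; the function name and the hex-SHA/PR-number heuristics are made up for the example:

```shell
# Illustrative classifier for the review-input dispatch described above.
classify_review_input() {
  local input="$1"
  if [ -z "$input" ]; then
    echo "uncommitted"   # git diff; git diff --cached
  elif [[ "$input" =~ ^[0-9a-f]{7,40}$ ]]; then
    echo "commit"        # git show <sha>
  elif [[ "$input" == *github.com* || "$input" == *pull* || "$input" =~ ^[0-9]+$ ]]; then
    echo "pr"            # gh pr view <pr>; gh pr diff <pr>
  else
    echo "branch"        # git diff <branch>...HEAD
  fi
}
```

A branch name like `feature/foo` falls through to the default case, while a bare number such as `42` is treated as a PR number.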
---
## What to Look For
**Bugs** - Your primary focus.
- Logic errors, off-by-one mistakes, incorrect conditionals
- Edge cases: null/empty inputs, error conditions, race conditions
- Security issues: injection, auth bypass, data exposure
- Broken error handling that swallows failures
**Structure** - Does the code fit the codebase?
- Does it follow existing patterns and conventions?
- Are there established abstractions it should use but doesn't?
**Performance** - Only flag if obviously problematic.
- O(n²) on unbounded data, N+1 queries, blocking I/O on hot paths
## Before You Flag Something
Be certain. If you're going to call something a bug, you need to be confident it actually is one.
- Only review the changes - do not review pre-existing code that wasn't modified
- Don't flag something as a bug if you're unsure - investigate first
- Don't flag style preferences as issues
- Don't invent hypothetical problems - if an edge case matters, explain the realistic scenario where it breaks
- If you need more context to be sure, use the tools below to get it
## Tools
Use these to inform your review:
- **Explore agent** - Find how existing code handles similar problems. Check patterns, conventions, and prior art before claiming something doesn't fit.
- **Exa Code Context** - Verify correct usage of libraries/APIs before flagging something as wrong.
- **Exa Web Search** - Research best practices if you're unsure about a pattern.
If you're uncertain about something and can't verify it with these tools, say "I'm not sure about X" rather than flagging it as a definite issue.
## Tone and Approach
1. If there is a bug, be direct and clear about why it is a bug.
2. Clearly communicate the severity of each issue; do not claim issues are more severe than they actually are.
3. Critiques should clearly and explicitly communicate the scenarios, environments, or inputs that are necessary for the bug to arise. The comment should immediately indicate that the issue's severity depends on these factors.
4. Your tone should be matter-of-fact and not accusatory or overly positive. It should read as a helpful AI assistant suggestion without sounding too much like a human reviewer.
5. Write so the reader can quickly understand the issue without reading closely.
6. AVOID flattery, do not give any comments that are not helpful to the reader. Avoid phrasing like "Great job ...", "Thanks for ...".
Now I have a complete picture of the changes. This is a refactoring of a single-file plugin (~1300 lines) into a multi-file architecture with 7 modules. Let me review for issues.
---
## Code Review Summary
This PR refactors a monolithic `tokenscope.ts` (~1300 lines) into a multi-file architecture with 7 modules in `plugin/lib/`. The refactoring is clean and improves maintainability.
### Issues Found
**1. Bug: Unused imports in `analyzer.ts`** (Low severity)
`plugin/lib/analyzer.ts:1-13`
```typescript
import type {
SessionMessage,
SessionMessagePart,
TokenModel,
TokenAnalysis,
CategoryEntrySource,
CategoryEntry,
CategorySummary,
isToolPart, // <-- These are type imports
isReasoningPart, // <-- but they're functions, not types
isTextPart,
} from "./types"
```
The type guards `isToolPart`, `isReasoningPart`, and `isTextPart` are imported as types but also re-imported as values on the next line. The `import type` versions are unused. This is harmless but indicates a copy-paste oversight during refactoring.
**Suggested fix:** Remove the unused type imports:
```typescript
import type {
SessionMessage,
SessionMessagePart,
TokenModel,
TokenAnalysis,
CategoryEntrySource,
CategoryEntry,
CategorySummary,
} from "./types"
```
---
**2. Potential Issue: Duplicated pricing lookup logic** (Low severity, code quality)
`plugin/lib/subagent.ts:137-146` duplicates the `getPricing` logic from `CostCalculator` (`plugin/lib/cost.ts:43-53`).
The `SubagentAnalyzer` class receives a `CostCalculator` in its constructor but then implements its own `getPricing` method instead of delegating to the calculator. If pricing lookup logic changes in `CostCalculator`, it would need to be updated in two places.
**Suggested fix:** Either:
- Make `CostCalculator.getPricing()` public and use it in `SubagentAnalyzer`
- Or extract pricing lookup to a shared utility
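A minimal sketch of the first option, with hypothetical `Pricing` and method shapes standing in for the plugin's real types (the actual signatures in `cost.ts` and `subagent.ts` may differ):

```typescript
// Hypothetical sketch: SubagentAnalyzer delegates to the CostCalculator it
// already receives, so the pricing lookup lives in exactly one place.
// Pricing shape and method names are illustrative, not the plugin's real API.
interface Pricing {
  input: number  // cost per input token
  output: number // cost per output token
}

class CostCalculator {
  constructor(private pricingTable: Map<string, Pricing>) {}

  // Made public so callers can reuse the single lookup implementation.
  getPricing(modelID: string): Pricing | undefined {
    return this.pricingTable.get(modelID)
  }
}

class SubagentAnalyzer {
  constructor(private costCalculator: CostCalculator) {}

  estimateCost(modelID: string, inputTokens: number, outputTokens: number): number {
    // Delegate instead of re-implementing the lookup.
    const pricing = this.costCalculator.getPricing(modelID)
    if (!pricing) return 0
    return inputTokens * pricing.input + outputTokens * pricing.output
  }
}

const calc = new CostCalculator(new Map([["gpt-test", { input: 0.001, output: 0.002 }]]))
const analyzer = new SubagentAnalyzer(calc)
console.log(analyzer.estimateCost("gpt-test", 100, 50)) // ≈ 0.2
```

With delegation, a future change to how pricing is resolved (aliases, defaults, cache tiers) only touches `CostCalculator`.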
---
**3. Minor: directory handling in `install.sh` is hard-coded** (Low severity, structural)
`plugin/install.sh:115-122`
```bash
if [ "$dir" = "plugin" ]; then
mv "$TEMP_DIR/$filename" "$OPENCODE_DIR/plugin/$filename"
elif [ "$dir" = "plugin/lib" ]; then
mv "$TEMP_DIR/$filename" "$OPENCODE_DIR/plugin/lib/$filename"
else
mv "$TEMP_DIR/$filename" "$OPENCODE_DIR/command/$filename"
fi
```
The `dir` variable is extracted using `dirname "$file"`, but the branching only recognizes `plugin` and `plugin/lib`: any file in a new or deeper directory would fall through to the `else` branch and be moved into `command/`. Given the current file structure, this works correctly, so this is a minor structural concern, not a bug.
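One way to remove the hard-coded branching is to mirror whatever directory the file list names. This is a hypothetical sketch, not the actual `install.sh`; `TEMP_DIR`, `OPENCODE_DIR`, and the file list stand in for the script's own variables, and the deep `plugin/lib/deep/` path is invented to show nesting:

```shell
# Hypothetical sketch: recreate each file's directory on both sides instead of
# branching on known directory names, so deeper nesting also works.
TEMP_DIR=$(mktemp -d)
OPENCODE_DIR=$(mktemp -d)

for file in plugin/tokenscope.ts plugin/lib/types.ts plugin/lib/deep/extra.ts; do
  dir=$(dirname "$file")
  filename=$(basename "$file")
  mkdir -p "$TEMP_DIR/$dir"
  : > "$TEMP_DIR/$dir/$filename"    # simulate a downloaded file
  mkdir -p "$OPENCODE_DIR/$dir"     # recreate the target path, however deep
  mv "$TEMP_DIR/$dir/$filename" "$OPENCODE_DIR/$dir/$filename"
done
```

This keeps the move logic identical for every file, so adding a new subdirectory later would not require touching the install script.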
---
### No Issues Found With
- **Type definitions** in `types.ts` are well-structured
- **Module boundaries** are clean and logical
- **Error handling** is preserved from the original implementation
- **README documentation** updates correctly reflect the new structure
- **Verification step** in `install.sh` correctly includes all new files
## Code Review Results
The reviewer found 2 issues worth addressing:
### 1. Bug: Unused type imports in analyzer.ts (Low severity)
Lines 1-13 have redundant imports - the type guards are imported both as types AND as values:
```typescript
import type {
  // ...
  isToolPart,      // <-- Unused, imported as type
  isReasoningPart, // <-- Unused, imported as type
  isTextPart,      // <-- Unused, imported as type
} from "./types"
import { isToolPart as toolGuard, ... } from "./types" // <-- Actually used
```

### 2. Duplicated pricing logic in subagent.ts (Low severity, code quality)
The SubagentAnalyzer.getPricing() method duplicates the same logic as CostCalculator.getPricing(). If pricing logic changes, it needs updating in two places.
---
Would you like me to fix these issues? They're both minor but cleaning them up would improve the code quality.