Add usage logs viewer#10

Merged
ScriptSmith merged 7 commits into main from usage-logs on Mar 15, 2026

Conversation

@ScriptSmith (Owner)

No description provided.

@ScriptSmith changed the title from "Addd usage logs" to "Add usage logs" on Mar 14, 2026
@greptile-apps (Contributor)

greptile-apps bot commented Mar 14, 2026

Greptile Summary

This PR introduces per-request usage log visibility across the stack: two new backend endpoints (GET /admin/v1/usage/logs and GET /admin/v1/me/usage/logs) plus their export variants, backed by a cursor-paginated list_logs query on both PostgreSQL and SQLite, and a new UsageLogsTable React component that integrates into the admin usage page and the personal "My Usage" page.

Key changes:

  • New UsageLogRecord read model and UsageLogQuery struct cleanly separate the write path (UsageLogEntry) from the read path.
  • Cursor pagination is driven by a new composite (recorded_at DESC, id DESC) index added to both database migration files.
  • scopeToQueryFilters in UsageLogsTable now correctly maps project and team scope types to project_id and team_id query parameters (addressing a gap identified in a previous review).
  • Export supports CSV (csv-export feature-gated) and JSONL, hard-capped at 10,000 rows.

Issues found:

  • The export silently truncates results at 10,000 rows with no response header or body signal, so callers cannot detect an incomplete export.
  • UsageLogCsvRow omits http_referer, image_count, audio_seconds, and character_count fields that are available in the JSON response, making the CSV unsuitable for multimodal cost analysis.
  • In the SQLite list_logs implementation, from/to date filters are explicitly serialized to RFC3339 strings, but the cursor's created_at is bound as a native DateTime<Utc> — an inconsistency that could cause incorrect cursor comparisons depending on sqlx's internal serialization format.

Confidence Score: 3/5

  • Safe to merge with low risk, but silent export truncation and missing CSV fields should be addressed before heavy production use.
  • The core pagination and auth logic is sound. The previously flagged project/team scope gap is resolved. Two issues reduce confidence: the export truncation without any signal to the caller is a correctness/UX concern that could cause data integrity problems for users relying on the export, and the SQLite DateTime binding inconsistency is a latent bug that may not surface immediately but could cause subtle pagination errors in SQLite deployments.
  • src/routes/admin/usage.rs (export truncation and CSV missing fields) and src/db/sqlite/usage.rs (DateTime binding inconsistency in cursor).

Important Files Changed

  • src/routes/admin/usage.rs: Adds four new endpoints (list/export for admin and me); export silently truncates at 10,000 records with no indication to callers, and the CSV row struct omits several fields present in the JSON response.
  • src/db/sqlite/usage.rs: Implements cursor-paginated list_logs for SQLite; from/to filters serialize to RFC3339 strings but the cursor created_at is bound natively as DateTime<Utc>, creating an inconsistency that could produce incorrect pagination results.
  • src/db/postgres/usage.rs: Implements cursor-paginated list_logs for PostgreSQL using ROW(...) comparisons; dynamic SQL construction and parameter binding order are consistent and correct.
  • src/db/repos/usage.rs: Adds UsageLogQuery struct and list_logs trait method; clean definition with sensible field types.
  • src/models/usage.rs: Introduces UsageLogRecord read model, clearly distinct from the existing UsageLogEntry write model.
  • ui/src/components/UsageLogs/UsageLogsTable.tsx: New component with filterable table, cursor pagination, and export; project and team scope types are now correctly mapped to project_id/team_id query filters.
  • ui/src/pages/admin/UsagePage.tsx: Adds a "Logs" tab alongside analytics; scope construction includes projectId and teamId for correct downstream filtering.
  • migrations_sqlx/postgres/20250101000000_initial.sql: Adds composite (recorded_at DESC, id DESC) index to support efficient cursor pagination; matches the query's ORDER BY clause.
  • migrations_sqlx/sqlite/20250101000000_initial.sql: Same composite index addition as the Postgres migration; consistent across both backends.

Sequence Diagram

sequenceDiagram
    participant UI as UsageLogsTable
    participant API as Axum Router
    participant Svc as UsageService
    participant DB as UsageRepo (PG/SQLite)

    UI->>API: GET /admin/v1/usage/logs?limit=50&cursor=...
    API->>API: authz.require("usage","list")
    API->>API: clamp limit (max 1000)
    API->>Svc: list_logs(UsageLogQuery)
    Svc->>DB: list_logs(query)
    DB->>DB: build WHERE clause + cursor condition
    DB->>DB: SELECT ... ORDER BY recorded_at DESC, id DESC LIMIT 51
    DB-->>Svc: ListResult { items, has_more, cursors }
    Svc-->>API: ListResult
    API-->>UI: UsageLogListResponse { data[], pagination }

    UI->>API: GET /admin/v1/usage/logs/export?format=csv
    API->>API: authz.require("usage","list")
    API->>Svc: list_logs(limit=10_000)
    Svc->>DB: list_logs(query)
    DB-->>Svc: ListResult (possibly truncated at 10k)
    Svc-->>API: ListResult
    API->>API: build_export_response(items, Csv|Jsonl)
    API-->>UI: Content-Disposition: attachment (no truncation signal)
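The limit + 1 pattern in the list flow above (LIMIT 51 for a page size of 50) can be sketched in plain Rust. This is a minimal in-memory model of keyset pagination over (recorded_at DESC, id DESC); the `Row` shape and `page` helper are illustrative stand-ins, not the PR's actual types:

```rust
// Sketch of keyset (cursor) pagination, assuming a DESC sort on both keys.
#[derive(Clone, Debug)]
struct Row {
    recorded_at: i64, // epoch seconds stands in for DateTime<Utc>
    id: i64,
}

/// Return one page of up to `limit` rows plus a `has_more` flag, by fetching
/// limit + 1 rows and trimming the sentinel row.
fn page(rows: &[Row], cursor: Option<(i64, i64)>, limit: usize) -> (Vec<Row>, bool) {
    let mut sorted: Vec<Row> = rows.to_vec();
    // ORDER BY recorded_at DESC, id DESC
    sorted.sort_by(|a, b| (b.recorded_at, b.id).cmp(&(a.recorded_at, a.id)));
    let mut out: Vec<Row> = sorted
        .into_iter()
        // cursor condition: (recorded_at, id) < (cursor_ts, cursor_id),
        // i.e. strictly "after" the cursor in DESC order
        .filter(|r| match cursor {
            Some((ts, id)) => (r.recorded_at, r.id) < (ts, id),
            None => true,
        })
        .take(limit + 1) // LIMIT limit + 1
        .collect();
    let has_more = out.len() > limit;
    out.truncate(limit);
    (out, has_more)
}

fn main() {
    let rows: Vec<Row> = (1..=5).map(|i| Row { recorded_at: 100 + i, id: i }).collect();
    let (p1, more) = page(&rows, None, 2);
    assert_eq!(p1.len(), 2);
    assert!(more);
    // next cursor = last row of the current page
    let last = p1.last().unwrap();
    let (p2, _) = page(&rows, Some((last.recorded_at, last.id)), 2);
    assert_eq!(p2[0].recorded_at, 103);
}
```

The tuple comparison `(recorded_at, id) < (cursor_ts, cursor_id)` is the same lexicographic semantics the Postgres implementation gets from its ROW(...) comparison, which is why the composite index matching the ORDER BY clause makes each page fetch a range scan.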

Comments Outside Diff (3)

  1. src/routes/admin/usage.rs, line 976-1057 (link)

    Export truncation is silent with no indication to callers

    into_export_query() hard-caps the limit at 10_000, but the response gives no indication that results were truncated. If more than 10,000 records match the filter, the caller receives a silent partial export — there is no has_more field, no X-Truncated response header, and no warning in either the CSV or JSONL output.

    A user downloading their logs to audit costs could receive an incomplete file without realizing it, leading to incorrect conclusions.

    Consider returning a Hadrian-Truncated: true response header (or equivalent) when result.has_more is true after fetching 10_001 rows, e.g.:

    let has_more = result.has_more; // from ListResult
    let mut response = build_export_response(result.items, format)?;
    if has_more {
        response.headers_mut().insert(
            "X-Export-Truncated",
            axum::http::HeaderValue::from_static("true"),
        );
    }
    Ok(response)
  2. src/routes/admin/usage.rs, line 950-974 (link)

    CSV export omits multimodal and referer fields

    UsageLogCsvRow is missing http_referer, image_count, audio_seconds, and character_count — all of which are present in UsageLogResponse. For deployments that log image or audio requests, the exported CSV will silently drop cost-relevant token counts and lose the referer attribution entirely, making the export unsuitable for full cost reconciliation.

    Consider adding the missing fields:

    struct UsageLogCsvRow {
        // ...existing fields...
        http_referer: String,
        image_count: String,
        audio_seconds: String,
        character_count: String,
    }

    And populate them in the conversion:

    http_referer: resp.http_referer.unwrap_or_default(),
    image_count: resp.image_count.map(|v| v.to_string()).unwrap_or_default(),
    audio_seconds: resp.audio_seconds.map(|v| v.to_string()).unwrap_or_default(),
    character_count: resp.character_count.map(|v| v.to_string()).unwrap_or_default(),
  3. src/db/sqlite/usage.rs, line 394-445 (link)

    Inconsistent DateTime binding between date filters and cursor

    The from and to date filter params are explicitly serialized to RFC3339 strings before being pushed into the Vec<String> params vector:

    params.push(from.to_rfc3339());

    But the cursor's created_at is bound directly as a native DateTime<Utc>:

    qb = qb.bind(c.created_at).bind(c.id.to_string());

    SQLite has no native datetime type; both values end up as TEXT. If sqlx's internal DateTime<Utc> serializer produces a format that differs from what to_rfc3339() produces (e.g. space separator vs T, fractional seconds precision, or timezone suffix), the cursor comparison (recorded_at, id) {} (?, ?) could silently produce wrong results — either skipping records or returning duplicates across pages.
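    To see why a format mismatch corrupts the comparison, here is a self-contained check. The literal strings are assumed examples of the two formats under discussion ('T'-separated RFC3339 vs a space-separated variant), not sqlx's verified output:

    // 'T' (0x54) sorts after ' ' (0x20), so two text renderings of
    // datetimes on the same date compare by separator, not by time.
    fn main() {
        let rfc3339 = "2026-03-14T10:00:00+00:00"; // chrono to_rfc3339() style
        let spaced  = "2026-03-14 10:00:00+00:00"; // space-separated variant
        // Same instant, but the RFC3339 form sorts strictly later as TEXT:
        assert!(rfc3339 > spaced);
        // Worse: an EARLIER instant in 'T' format still sorts after a
        // later space-format instant on the same date, so a mixed-format
        // cursor comparison can skip or duplicate rows at page boundaries.
        assert!("2026-03-14T09:00:00+00:00" > "2026-03-14 23:59:59+00:00");
    }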

    For consistency and safety, serialize c.created_at the same way:

    if let Some(ref c) = cursor {
        qb = qb.bind(c.created_at.to_rfc3339()).bind(c.id.to_string());
    }

Last reviewed commit: 7273371

@ScriptSmith (Owner, Author)

@greptile-apps

@ScriptSmith changed the title from "Add usage logs" to "Add usage logs viewer" on Mar 14, 2026
@ScriptSmith ScriptSmith merged commit f709e1b into main Mar 15, 2026
19 checks passed