Merged
8 changes: 8 additions & 0 deletions README.md
@@ -41,6 +41,14 @@ browser acts as the runtime host for render, lint, and typecheck flows.

- GitHub PAT setup and usage: [docs/byot.md](docs/byot.md)

## Fine-Grained PAT Quick Setup

For AI/BYOT flows, use a fine-grained GitHub PAT and follow the existing setup guide:

- Full setup and behavior: [docs/byot.md](docs/byot.md)
- Repository permissions screenshot: [docs/media/byot-repo-perms.png](docs/media/byot-repo-perms.png)
- Models permission screenshot: [docs/media/byot-model-perms.png](docs/media/byot-model-perms.png)

## License

MIT
17 changes: 9 additions & 8 deletions docs/next-steps.md
@@ -19,18 +19,19 @@ Focused follow-up work for `@knighted/develop`.
 - Suggested implementation prompt:
   - "Add a deterministic E2E execution mode for `@knighted/develop` that serves pinned runtime artifacts locally (instead of live CDN fetches) and wire it into CI as a required check on every PR. Keep a separate lightweight CDN-smoke E2E check for real-network coverage. Validate with `npm run lint`, deterministic Playwright PR checks, and one CDN-smoke Playwright run."

-4. **Issue #18 continuation (resume from Phase 2)**
-   - Continue the GitHub AI assistant rollout after completed Phases 0-1:
+4. **Issue #18 continuation (resume from Phase 3)**
+   - Current rollout status:
     - Phase 0 complete: feature flag + scaffolding.
     - Phase 1 complete: BYOT token flow, localStorage persistence, writable repo discovery/filtering.
-   - Implement the next slice first:
-     - Phase 2: chat drawer UX with streaming responses first, plus non-streaming fallback.
-     - Add selected repository state plumbing now so Phase 4 (PR write flow) can reuse it.
-     - Add README documentation for fine-grained PAT setup (reuse existing screenshots referenced in docs/byot.md).
+     - Phase 2 complete: separate AI chat drawer UX, streaming-first responses with non-stream fallback, selected repository context plumbing, and README fine-grained PAT setup links.
+   - Implement the next slice first (Phase 3):
+     - Add mode-aware recommendation behavior so the assistant strongly adapts suggestions to the current render mode and style mode.
+     - Add an editor update workflow where the assistant can propose structured edits and the user can apply them to the Component and Styles editors with explicit confirmation.
+     - Add filename groundwork for upcoming PR flows by allowing user-defined Component and Styles file names, persisted per selected repository.
    - Keep behavior and constraints aligned with current implementation:
      - Keep everything behind the existing browser-only AI feature flag.
      - Preserve BYOT token semantics (localStorage persistence until user deletes).
      - Keep CDN-first runtime behavior and existing fallback model.
      - Do not add dependencies without explicit approval.
-   - Suggested implementation prompt:
-     - "Continue Issue #18 in @knighted/develop from the current Phase 1 baseline. Implement Phase 2 by adding a separate AI chat drawer with streaming response rendering (primary) and a non-streaming fallback path. Wire selected repository state as shared app state for upcoming Phase 4 PR actions. Update README with a concise fine-grained PAT setup section that links to existing BYOT screenshot assets/docs. Keep all AI/BYOT UI and behavior behind the existing browser-only feature flag, preserve current token persistence and repo filtering behavior, and validate with npm run lint plus targeted Playwright coverage for chat drawer visibility, streaming/fallback behavior, and repo-context selection plumbing."
+   - Phase 3 mini-spec (agent implementation prompt):
+     - "Continue Issue #18 in @knighted/develop from the current Phase 2 baseline. Implement Phase 3 with three deliverables. (1) Add mode-aware assistant guidance: when collecting AI context, include explicit policy hints derived from render mode and style mode, and ensure recommendations avoid incompatible patterns (for example, avoid React hook/state guidance in DOM mode unless the user explicitly asks for a React migration). (2) Add an assistant-to-editor apply flow: support structured assistant responses that can propose edits for the component and/or styles editors; render these as reviewable actions in the chat drawer, require explicit user confirmation to apply, and support a one-step undo for the last applied assistant edit per editor. (3) Add PR-prep filename metadata: introduce user-editable fields for Component filename and Styles filename in AI controls, validate a simple safe filename format, and persist/reload values scoped to the selected repository so the Phase 4 PR write flow can reuse them. Keep all AI/BYOT behavior behind the existing browser-only AI feature flag and preserve current token/repo persistence semantics. Do not add dependencies. Validate with npm run lint and targeted Playwright tests covering: mode-aware recommendation constraints, apply/undo editor actions, and repository-scoped filename persistence."
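The repository-scoped filename deliverable in the mini-spec above can be sketched as a small validation-plus-persistence helper. This is a hedged illustration only: the helper names (`isSafeFilename`, `filenameKey`, `saveFilename`), the regex, and the storage-key format are assumptions, not the PR's actual implementation.

```typescript
// Accept simple names like "Component.tsx" or "styles.css"; reject path
// separators and traversal outright (first char must be alphanumeric).
const SAFE_FILENAME = /^[A-Za-z0-9][A-Za-z0-9._-]*\.(tsx?|jsx?|css)$/

const isSafeFilename = (name: string): boolean => SAFE_FILENAME.test(name)

// Scope persistence to the selected repository so a later PR write flow
// could reload the same values per repo.
const filenameKey = (repo: string, kind: 'component' | 'styles'): string =>
  `ai-filenames:${repo}:${kind}`

// Structural stand-in for localStorage so the sketch is environment-agnostic.
type KeyValueStore = { setItem(key: string, value: string): void }

const saveFilename = (
  storage: KeyValueStore,
  repo: string,
  kind: 'component' | 'styles',
  name: string,
): boolean => {
  if (!isSafeFilename(name)) return false
  storage.setItem(filenameKey(repo, kind), name)
  return true
}
```

In the browser, `storage` would typically be `localStorage`; the structural `KeyValueStore` type keeps the sketch testable without a DOM.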
236 changes: 236 additions & 0 deletions playwright/app.spec.ts
@@ -4,6 +4,16 @@ import type { Page } from '@playwright/test'
const webServerMode = process.env.PLAYWRIGHT_WEB_SERVER_MODE ?? 'dev'
const appEntryPath = webServerMode === 'preview' ? '/index.html' : '/src/index.html'

type ChatRequestMessage = {
role?: string
content?: string
}

type ChatRequestBody = {
metadata?: unknown
messages?: ChatRequestMessage[]
}

const waitForAppReady = async (page: Page, path = appEntryPath) => {
await page.goto(path)
await expect(page.getByRole('heading', { name: '@knighted/develop' })).toBeVisible()
@@ -100,6 +110,42 @@ const ensureDiagnosticsDrawerClosed = async (page: Page) => {
await expect(page.locator('#diagnostics-drawer')).toBeHidden()
}

const ensureAiChatDrawerOpen = async (page: Page) => {
const toggle = page.locator('#ai-chat-toggle')
const isExpanded = await toggle.getAttribute('aria-expanded')

if (isExpanded !== 'true') {
await toggle.click()
}

await expect(page.locator('#ai-chat-drawer')).toBeVisible()
}

const connectByotWithSingleRepo = async (page: Page) => {
await page.route('https://api.github.com/user/repos**', async route => {
await route.fulfill({
status: 200,
contentType: 'application/json',
body: JSON.stringify([
{
id: 11,
owner: { login: 'knightedcodemonkey' },
name: 'develop',
full_name: 'knightedcodemonkey/develop',
default_branch: 'main',
permissions: { push: true },
},
]),
})
})

await page.locator('#github-token-input').fill('github_pat_fake_chat_1234567890')
await page.locator('#github-token-add').click()
await expect(page.locator('#github-repo-select')).toHaveValue(
'knightedcodemonkey/develop',
)
}

const expectCollapseButtonState = async (
page: Page,
panelName: 'component' | 'styles' | 'preview',
@@ -136,6 +182,8 @@ test('BYOT controls stay hidden when feature flag is disabled', async ({ page })
const byotControls = page.locator('#github-ai-controls')
await expect(byotControls).toHaveAttribute('hidden', '')
await expect(byotControls).toBeHidden()
await expect(page.locator('#ai-chat-toggle')).toBeHidden()
await expect(page.locator('#ai-chat-drawer')).toBeHidden()
})

test('BYOT controls render when feature flag is enabled by query param', async ({
@@ -147,6 +195,194 @@ test('BYOT controls render when feature flag is enabled by query param', async (
await expect(byotControls).toBeVisible()
await expect(page.locator('#github-token-input')).toBeVisible()
await expect(page.locator('#github-token-add')).toBeVisible()
await expect(page.locator('#github-ai-controls #ai-chat-toggle')).toBeHidden()
})

test('AI chat drawer opens and closes when feature flag is enabled', async ({ page }) => {
await waitForAppReady(page, `${appEntryPath}?feature-ai=true`)
await connectByotWithSingleRepo(page)

const chatToggle = page.locator('#ai-chat-toggle')
const chatDrawer = page.locator('#ai-chat-drawer')

await expect(chatToggle).toBeVisible()
await expect(chatToggle).toHaveAttribute('aria-expanded', 'false')

await chatToggle.click()
await expect(chatDrawer).toBeVisible()
await expect(chatToggle).toHaveAttribute('aria-expanded', 'true')

await page.locator('#ai-chat-close').click()
await expect(chatDrawer).toBeHidden()
await expect(chatToggle).toHaveAttribute('aria-expanded', 'false')
})

test('AI chat prefers streaming responses when available', async ({ page }) => {
let streamRequestBody: ChatRequestBody | undefined

await page.route('https://models.github.ai/inference/chat/completions', async route => {
streamRequestBody = route.request().postDataJSON() as ChatRequestBody

await route.fulfill({
status: 200,
contentType: 'text/event-stream',
body: [
'data: {"choices":[{"delta":{"content":"Streaming "}}]}',
'',
'data: {"choices":[{"delta":{"content":"response ready"}}]}',
'',
'data: [DONE]',
'',
].join('\n'),
})
})

await waitForAppReady(page, `${appEntryPath}?feature-ai=true`)
await connectByotWithSingleRepo(page)
await ensureAiChatDrawerOpen(page)

await page.locator('#ai-chat-prompt').fill('Summarize this repository.')
await page.locator('#ai-chat-send').click()

await expect(page.locator('#ai-chat-status')).toHaveText(
'Response streamed from GitHub.',
)
await expect(page.locator('#ai-chat-rate')).toHaveText('Rate limit info unavailable')
await expect(page.locator('#ai-chat-messages')).toContainText(
'Summarize this repository.',
)
await expect(page.locator('#ai-chat-messages')).toContainText(
'Streaming response ready',
)

expect(streamRequestBody?.metadata).toBeUndefined()
const systemMessage = streamRequestBody?.messages?.find(
(message: ChatRequestMessage) => message.role === 'system',
)
const systemMessages = streamRequestBody?.messages?.filter(
(message: ChatRequestMessage) => message.role === 'system',
)
expect(systemMessage?.content).toContain('Selected repository context')
expect(systemMessage?.content).toContain('Repository: knightedcodemonkey/develop')
expect(systemMessage?.content).toContain(
'Repository URL: https://github.com/knightedcodemonkey/develop',
)
expect(
systemMessages?.some((message: ChatRequestMessage) =>
message.content?.includes('Editor context:'),
),
).toBe(true)
})
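The mocked `text/event-stream` bodies in this test follow the SSE chat-completions shape (`data:` lines carrying delta chunks, terminated by `data: [DONE]`). As a minimal sketch of how such a stream reassembles into one message — assuming the same chunk JSON as the mocks; `concatSseContent` is an illustrative helper, not part of the app:

```typescript
// Shape of one mocked streaming chunk from the test fixtures above.
type StreamChunk = { choices?: { delta?: { content?: string } }[] }

// Walk the raw event-stream body line by line, parse each `data:` payload,
// and concatenate the delta contents until the [DONE] sentinel.
const concatSseContent = (raw: string): string => {
  let text = ''
  for (const line of raw.split('\n')) {
    const trimmed = line.trim()
    if (!trimmed.startsWith('data:')) continue
    const payload = trimmed.slice('data:'.length).trim()
    if (payload === '[DONE]') break
    const chunk = JSON.parse(payload) as StreamChunk
    text += chunk.choices?.[0]?.delta?.content ?? ''
  }
  return text
}
```

A real client would read the body incrementally from a `ReadableStream`; buffering the whole body, as here, only works because the mock fulfills in one piece.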

test('AI chat can disable editor context payload via checkbox', async ({ page }) => {
let streamRequestBody: ChatRequestBody | undefined

await page.route('https://models.github.ai/inference/chat/completions', async route => {
streamRequestBody = route.request().postDataJSON() as ChatRequestBody

await route.fulfill({
status: 200,
contentType: 'text/event-stream',
body: [
'data: {"choices":[{"delta":{"content":"ok"}}]}',
'',
'data: [DONE]',
'',
].join('\n'),
})
})

await waitForAppReady(page, `${appEntryPath}?feature-ai=true`)
await connectByotWithSingleRepo(page)
await ensureAiChatDrawerOpen(page)

const includeEditorsToggle = page.locator('#ai-chat-include-editors')
await expect(includeEditorsToggle).toBeChecked()
await includeEditorsToggle.uncheck()

await page.locator('#ai-chat-prompt').fill('No editor source this time.')
await page.locator('#ai-chat-send').click()
await expect(page.locator('#ai-chat-status')).toHaveText(
'Response streamed from GitHub.',
)
await expect(page.locator('#ai-chat-rate')).toHaveText('Rate limit info unavailable')

expect(streamRequestBody?.metadata).toBeUndefined()
const systemMessages = streamRequestBody?.messages?.filter(
(message: ChatRequestMessage) => message.role === 'system',
)
expect(
systemMessages?.some((message: ChatRequestMessage) =>
message.content?.includes('Selected repository context'),
),
).toBe(true)
expect(
systemMessages?.some((message: ChatRequestMessage) =>
message.content?.includes(
'Repository URL: https://github.com/knightedcodemonkey/develop',
),
),
).toBe(true)
expect(
systemMessages?.some((message: ChatRequestMessage) =>
message.content?.includes('Editor context:'),
),
).toBe(false)
})

test('AI chat falls back to non-streaming response when streaming fails', async ({
page,
}) => {
let streamAttemptCount = 0
let fallbackAttemptCount = 0

await page.route('https://models.github.ai/inference/chat/completions', async route => {
const body = route.request().postDataJSON() as { stream?: boolean } | null
if (body?.stream) {
streamAttemptCount += 1
await route.fulfill({
status: 502,
contentType: 'application/json',
body: JSON.stringify({ message: 'stream failed' }),
})
return
}

fallbackAttemptCount += 1
await route.fulfill({
status: 200,
contentType: 'application/json',
body: JSON.stringify({
rate_limit: {
remaining: 17,
reset: 1704067200,
},
choices: [
{
message: {
role: 'assistant',
content: 'Fallback response from JSON path.',
},
},
],
}),
})
})

await waitForAppReady(page, `${appEntryPath}?feature-ai=true`)
await connectByotWithSingleRepo(page)
await ensureAiChatDrawerOpen(page)

await page.locator('#ai-chat-prompt').fill('Use fallback path.')
await page.locator('#ai-chat-send').click()

await expect(page.locator('#ai-chat-status')).toHaveText('Fallback response loaded.')
await expect(page.locator('#ai-chat-rate')).toHaveText('Remaining 17, resets 00:00 UTC')
await expect(page.locator('#ai-chat-messages')).toContainText(
'Fallback response from JSON path.',
)
expect(streamAttemptCount).toBeGreaterThan(0)
expect(fallbackAttemptCount).toBeGreaterThan(0)
})
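The fallback test above exercises a streaming-first request order: one `stream: true` attempt, then a non-streaming retry on failure. As a hedged sketch of that client logic — the `SendFn` shape and `chatWithFallback` name are illustrative, not the app's API:

```typescript
// A transport callback: performs one chat request, streaming or not.
type SendFn = (stream: boolean) => Promise<{ ok: boolean; text: string }>

// Prefer streaming; on a failed stream attempt, retry once without streaming.
const chatWithFallback = async (
  send: SendFn,
): Promise<{ text: string; streamed: boolean }> => {
  const streamAttempt = await send(true)
  if (streamAttempt.ok) return { text: streamAttempt.text, streamed: true }

  const fallback = await send(false)
  if (!fallback.ok) {
    throw new Error('both streaming and non-streaming requests failed')
  }
  return { text: fallback.text, streamed: false }
}
```

This is why the test asserts both counters are greater than zero: a correct client makes exactly one attempt on each path when streaming fails.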

test('BYOT remembers selected repository across reloads', async ({ page }) => {