# AI chat
LaunchSwift ships an AI chat UI (`AIChatView`) backed by `AIState`, with SSE streaming from the backend routes.
- `ios/SwiftLaunch/Features/AI/AIState.swift`
- `ios/SwiftLaunch/Features/AI/Models/AIModels.swift`
- `ios/SwiftLaunch/Features/AI/Services/AIService.swift`
- `ios/SwiftLaunch/Features/AI/Views/AIChatView.swift`
- `ios/SwiftLaunch/Features/AI/AIChatAccessPolicy.swift`
## Backend

- `backend/src/routes/chat.ts`
- `backend/src/routes/ai-proxy.ts`
- `backend/src/services/ai-proxy.ts`
- `backend/src/services/chat-store.ts`
## AIModel enum (current source)

```swift
enum AIModel: String, CaseIterable, Sendable {
    case gpt4 = "gpt-5-mini"
    case claude = "claude-sonnet-4-6"
    case gemini = "gemini-3-flash"
    case workersAI = "@cf/meta/llama-4-scout-17b-16e-instruct"
}
```

`AIState` defaults to `.workersAI`.
## AIState API

```swift
@MainActor
@Observable
final class AIState {
    var conversations: [ConversationSummary] = []
    var messages: [ChatMessage] = []
    var selectedConversationID: String?
    var selectedModel: AIModel = .workersAI
    var isLoading = false
    var isStreaming = false
    var error: AIError?

    func loadConversations() async
    func loadConversation(id: String) async
    func sendMessage(content: String)
    func retryLastAction()
    func cancelStream()
}
```

## Streaming protocol used by /api/chat routes
`chat.ts` emits these SSE events:
| Event | Data |
|---|---|
| `conversation` | `{ id, title }` |
| `token` | `{ text }` |
| `done` | `{ conversationId }` |
| `error` | `{ message }` |
`AIState` consumes these events and updates the assistant message as tokens arrive.
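To make the event flow concrete, here is a minimal sketch of how a client could parse one SSE frame into a typed event. The `ChatEvent` type and `parseSSEFrame` helper are illustrative, not the actual `AIState` implementation; only the event names and payload fields come from the table above.

```typescript
// Typed union mirroring the SSE events in the table above.
type ChatEvent =
  | { type: "conversation"; id: string; title: string }
  | { type: "token"; text: string }
  | { type: "done"; conversationId: string }
  | { type: "error"; message: string };

// Parse one SSE frame of the form:
//   event: token
//   data: {"text":"hi"}
function parseSSEFrame(frame: string): ChatEvent | null {
  let event = "";
  let data = "";
  for (const line of frame.split("\n")) {
    if (line.startsWith("event:")) event = line.slice(6).trim();
    else if (line.startsWith("data:")) data += line.slice(5).trim();
  }
  if (!event || !data) return null;
  const payload = JSON.parse(data) as Record<string, string>;
  switch (event) {
    case "conversation":
      return { type: "conversation", id: payload.id, title: payload.title };
    case "token":
      return { type: "token", text: payload.text };
    case "done":
      return { type: "done", conversationId: payload.conversationId };
    case "error":
      return { type: "error", message: payload.message };
    default:
      return null; // unknown event names are ignored
  }
}
```

A streaming client would split the response body on blank lines (`\n\n`) and feed each frame through a parser like this, appending `token` text to the in-progress assistant message.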
## Backend routes and behavior

`backend/src/index.ts` mounts:

```ts
app.route("/api/chat", chatRoutes)
app.route("/api/ai", aiProxyRoutes)
```
## Effective endpoints

| Method | Path | Auth required | Rate limited |
|---|---|---|---|
| POST | `/api/chat` | Yes (`protectedRoute`) | No |
| POST | `/api/chat/:id/message` | Yes | No |
| GET | `/api/chat/history` | Yes | No |
| GET | `/api/chat/:id` | Yes | No |
| DELETE | `/api/chat/:id` | Yes | No |
| POST | `/api/ai/chat` | Yes | Yes (`createRateLimit`) |
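As an illustration of calling these endpoints, the sketch below builds an authenticated request description. The `chatRequest` helper name and the Bearer-token auth scheme are assumptions for this example, not the actual client code.

```typescript
// Shape of a prepared HTTP request, kept framework-neutral for the sketch.
interface PreparedRequest {
  url: string;
  method: string;
  headers: Record<string, string>;
  body?: string;
}

// Build an authenticated request to one of the chat endpoints.
// Assumes Bearer-token auth; adjust to whatever protectedRoute expects.
function chatRequest(
  baseURL: string,
  token: string,
  method: "GET" | "POST" | "DELETE",
  path: string,
  payload?: unknown
): PreparedRequest {
  const headers: Record<string, string> = { Authorization: `Bearer ${token}` };
  if (payload !== undefined) headers["Content-Type"] = "application/json";
  return {
    url: `${baseURL}${path}`,
    method,
    headers,
    ...(payload !== undefined ? { body: JSON.stringify(payload) } : {}),
  };
}
```

For example, `chatRequest(base, token, "GET", "/api/chat/history")` describes a history fetch, and `chatRequest(base, token, "POST", "/api/chat", payload)` a new-conversation request; the result can be passed to `fetch(url, { method, headers, body })`.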
## Request payloads

### /api/chat and /api/chat/:id/message

```json
{
  "provider": "workers-ai",
  "model": "@cf/meta/llama-4-scout-17b-16e-instruct",
  "message": "Explain async/await in Swift",
  "systemPrompt": null
}
```
### /api/ai/chat

Uses the unified chat payload with a `messages` array:

```json
{
  "provider": "openai",
  "model": "gpt-5-mini",
  "stream": true,
  "messages": [
    { "role": "user", "content": "Hello" }
  ]
}
```

## Provider configuration
Configured in `backend/src/services/ai-proxy.ts`:

- `openai` → `OPENAI_API_KEY`
- `anthropic` → `ANTHROPIC_API_KEY`
- `gemini` → `GEMINI_API_KEY`
- `workers-ai` → Cloudflare AI binding (`AI`) via `WorkersAIProvider`
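The mapping above can be sketched as a small credential lookup. The `PROVIDER_ENV_KEYS` constant and `apiKeyFor` function are illustrative names, not the actual `ai-proxy.ts` code; only the provider names and env var names come from the list.

```typescript
// Provider-to-credential mapping; workers-ai uses the Cloudflare AI
// binding rather than an API key, so it maps to null here.
const PROVIDER_ENV_KEYS: Record<string, string | null> = {
  openai: "OPENAI_API_KEY",
  anthropic: "ANTHROPIC_API_KEY",
  gemini: "GEMINI_API_KEY",
  "workers-ai": null,
};

// Resolve the API key for a provider from an env-style object.
// Returns null for workers-ai, unknown providers, or unset vars.
function apiKeyFor(
  provider: string,
  env: Record<string, string | undefined>
): string | null {
  const name = PROVIDER_ENV_KEYS[provider];
  if (!name) return null;
  return env[name] ?? null;
}
```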
## Persistence

Conversation and message persistence for the `/api/chat*` routes is handled by `chat-store.ts`:
- uses D1 when available
- falls back to in-memory store otherwise (e.g. test/non-D1 environments)
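The fallback pattern can be sketched as a store interface with an in-memory implementation. All names below (`ChatStore`, `InMemoryChatStore`, `makeChatStore`, the `DB` binding) are illustrative assumptions, not the actual `chat-store.ts` API.

```typescript
interface ChatMessageRecord {
  role: string;
  content: string;
}

interface ChatStore {
  append(conversationId: string, msg: ChatMessageRecord): void;
  list(conversationId: string): ChatMessageRecord[];
}

// In-memory fallback used when no D1 binding is available
// (e.g. tests or non-D1 environments).
class InMemoryChatStore implements ChatStore {
  private conversations = new Map<string, ChatMessageRecord[]>();

  append(conversationId: string, msg: ChatMessageRecord): void {
    const msgs = this.conversations.get(conversationId) ?? [];
    msgs.push(msg);
    this.conversations.set(conversationId, msgs);
  }

  list(conversationId: string): ChatMessageRecord[] {
    return this.conversations.get(conversationId) ?? [];
  }
}

function makeChatStore(env: { DB?: unknown }): ChatStore {
  // A real D1-backed store would be returned when env.DB is present;
  // this sketch always falls back to memory.
  return new InMemoryChatStore();
}
```

The benefit of routing all persistence through one interface is that the SSE handlers in `chat.ts` never need to know which backend is active.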