
AI chat

LaunchSwift ships an AI chat UI (AIChatView) backed by AIState, with SSE streaming from backend routes.

  • ios/SwiftLaunch/Features/AI/AIState.swift
  • ios/SwiftLaunch/Features/AI/Models/AIModels.swift
  • ios/SwiftLaunch/Features/AI/Services/AIService.swift
  • ios/SwiftLaunch/Features/AI/Views/AIChatView.swift
  • ios/SwiftLaunch/Features/AI/AIChatAccessPolicy.swift
  • backend/src/routes/chat.ts
  • backend/src/routes/ai-proxy.ts
  • backend/src/services/ai-proxy.ts
  • backend/src/services/chat-store.ts
enum AIModel: String, CaseIterable, Sendable {
    case gpt4 = "gpt-5-mini"
    case claude = "claude-sonnet-4-6"
    case gemini = "gemini-3-flash"
    case workersAI = "@cf/meta/llama-4-scout-17b-16e-instruct"
}

AIState defaults to .workersAI.

@MainActor
@Observable
final class AIState {
    var conversations: [ConversationSummary] = []
    var messages: [ChatMessage] = []
    var selectedConversationID: String?
    var selectedModel: AIModel = .workersAI
    var isLoading = false
    var isStreaming = false
    var error: AIError?

    func loadConversations() async
    func loadConversation(id: String) async
    func sendMessage(content: String)
    func retryLastAction()
    func cancelStream()
}

Streaming protocol used by /api/chat routes


chat.ts emits SSE events:

Event          Data
conversation   { id, title }
token          { text }
done           { conversationId }
error          { message }

AIState consumes these events and updates the assistant message as tokens arrive.
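The token-assembly loop can be sketched as follows. The event names match the table above, but the parser itself is illustrative, not the shipped AIState code (which is Swift):

```typescript
// Parse the SSE events emitted by chat.ts and assemble the assistant
// message from token events. Event names come from the protocol above;
// this standalone parser is an illustrative sketch.
type ChatEvent =
  | { type: "conversation"; id: string; title: string }
  | { type: "token"; text: string }
  | { type: "done"; conversationId: string }
  | { type: "error"; message: string };

function parseSSE(raw: string): ChatEvent[] {
  const events: ChatEvent[] = [];
  let eventName = "";
  for (const line of raw.split("\n")) {
    if (line.startsWith("event:")) {
      eventName = line.slice("event:".length).trim();
    } else if (line.startsWith("data:")) {
      const data = JSON.parse(line.slice("data:".length).trim());
      events.push({ type: eventName, ...data } as ChatEvent);
    }
  }
  return events;
}

// Example stream, as chat.ts would emit it over SSE:
const stream = [
  'event: conversation\ndata: {"id":"c1","title":"New chat"}',
  'event: token\ndata: {"text":"Hel"}',
  'event: token\ndata: {"text":"lo"}',
  'event: done\ndata: {"conversationId":"c1"}',
].join("\n\n");

let assistantText = "";
for (const ev of parseSSE(stream)) {
  if (ev.type === "token") assistantText += ev.text;
}
console.log(assistantText); // "Hello"
```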

backend/src/index.ts mounts:

  • app.route("/api/chat", chatRoutes)
  • app.route("/api/ai", aiProxyRoutes)
Method   Path                    Auth required          Rate limited
POST     /api/chat               Yes (protectedRoute)   No
POST     /api/chat/:id/message   Yes                    No
GET      /api/chat/history       Yes                    No
GET      /api/chat/:id           Yes                    No
DELETE   /api/chat/:id           Yes                    No
POST     /api/ai/chat            Yes                    Yes (createRateLimit)
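The access policy above can be expressed as a small data table. This is an illustrative sketch (the route list comes from the docs table; the helper is not how backend/src/index.ts actually wires middleware):

```typescript
// The route policy table as data; the lookup helper is illustrative.
interface RoutePolicy {
  method: string;
  path: string;
  authRequired: boolean;
  rateLimited: boolean;
}

const routePolicies: RoutePolicy[] = [
  { method: "POST",   path: "/api/chat",             authRequired: true, rateLimited: false },
  { method: "POST",   path: "/api/chat/:id/message", authRequired: true, rateLimited: false },
  { method: "GET",    path: "/api/chat/history",     authRequired: true, rateLimited: false },
  { method: "GET",    path: "/api/chat/:id",         authRequired: true, rateLimited: false },
  { method: "DELETE", path: "/api/chat/:id",         authRequired: true, rateLimited: false },
  { method: "POST",   path: "/api/ai/chat",          authRequired: true, rateLimited: true },
];

function policyFor(method: string, path: string): RoutePolicy | undefined {
  return routePolicies.find((p) => p.method === method && p.path === path);
}
```

Note that every chat route requires auth, but only the provider proxy is rate limited.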
Example single-message request body:

{
  "provider": "workers-ai",
  "model": "@cf/meta/llama-4-scout-17b-16e-instruct",
  "message": "Explain async/await in Swift",
  "systemPrompt": null
}

The proxy also accepts a unified chat payload with a messages array:

{
  "provider": "openai",
  "model": "gpt-5-mini",
  "stream": true,
  "messages": [
    { "role": "user", "content": "Hello" }
  ]
}

Configured in backend/src/services/ai-proxy.ts:

  • openai → OPENAI_API_KEY
  • anthropic → ANTHROPIC_API_KEY
  • gemini → GEMINI_API_KEY
  • workers-ai → Cloudflare AI binding (AI) via WorkersAIProvider
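The provider-to-credential mapping can be sketched as a lookup. The env var names come from the list above; the helper itself is illustrative:

```typescript
// Map each provider to the secret it needs; workers-ai uses the Cloudflare
// AI binding rather than an API key. Illustrative sketch.
const providerKeys: Record<string, string | null> = {
  openai: "OPENAI_API_KEY",
  anthropic: "ANTHROPIC_API_KEY",
  gemini: "GEMINI_API_KEY",
  "workers-ai": null, // uses the AI binding, not an API key
};

function requiredEnvKey(provider: string): string | null {
  if (!(provider in providerKeys)) {
    throw new Error(`unknown provider: ${provider}`);
  }
  return providerKeys[provider];
}
```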

Conversation and message persistence for /api/chat* is handled by chat-store.ts:

  • uses D1 when available
  • falls back to in-memory store otherwise (e.g. test/non-D1 environments)
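The fallback half of that pattern can be sketched with a minimal in-memory implementation; the ChatStore interface and class name here are illustrative, not the shipped chat-store.ts code:

```typescript
// Minimal in-memory chat store, the kind of fallback used when no D1
// binding is available. Interface and names are illustrative.
interface StoredMessage {
  role: string;
  content: string;
}

interface ChatStore {
  append(conversationId: string, msg: StoredMessage): Promise<void>;
  history(conversationId: string): Promise<StoredMessage[]>;
}

class MemoryChatStore implements ChatStore {
  private data = new Map<string, StoredMessage[]>();

  async append(id: string, msg: StoredMessage): Promise<void> {
    const list = this.data.get(id) ?? [];
    list.push(msg);
    this.data.set(id, list);
  }

  async history(id: string): Promise<StoredMessage[]> {
    return this.data.get(id) ?? [];
  }
}

// In the worker, a D1-backed implementation would be chosen when the
// binding exists, falling back to memory otherwise, e.g.:
//   const store = env.DB ? new D1ChatStore(env.DB) : new MemoryChatStore();
```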