whisper.swiftui

A sample SwiftUI app that uses whisper.cpp for voice-to-text transcription. See also: whisper.objc.

Usage:

  1. Select a model from the whisper.cpp repository. [1]
  2. Add the model to whisper.swiftui.demo/Resources/models via Xcode.
  3. Select a sample audio file (for example, jfk.wav).
  4. Add the sample audio file to whisper.swiftui.demo/Resources/samples via Xcode.
  5. Select the "Release" build configuration [2] under "Run", then deploy and run the app on your device.

Note: Pay attention to the folder paths: whisper.swiftui.demo/Resources/models is where bundled resources such as models and samples belong, while whisper.swiftui.demo/Models contains source code.
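
For orientation, here is a minimal sketch of how the bundled resources and the whisper.cpp C API fit together. It assumes the C API is visible to Swift (the demo exposes it through whisper.cpp.swift); the helper names, the ggml-base.en model, and the hard-coded thread count are illustrative, not the app's actual implementation.

```swift
import Foundation

// Locate a model bundled under Resources/models (steps 1-2 above).
// The model name is illustrative; use whichever ggml model you added.
func bundledModelURL(named name: String = "ggml-base.en") -> URL? {
    Bundle.main.url(forResource: name, withExtension: "bin", subdirectory: "models")
}

// Minimal transcription flow over the whisper.cpp C API.
// `samples` must already be 16 kHz mono PCM converted to Float values,
// which is how the demo feeds decoded WAV data (e.g. jfk.wav) to whisper.
func transcribe(modelPath: String, samples: [Float]) -> String? {
    // Older whisper.cpp versions use whisper_init_from_file(_:) instead.
    guard let ctx = whisper_init_from_file_with_params(modelPath, whisper_context_default_params()) else {
        return nil
    }
    defer { whisper_free(ctx) }

    var params = whisper_full_default_params(WHISPER_SAMPLING_GREEDY)
    params.print_realtime = false
    params.print_progress = false
    params.n_threads = 4 // illustrative; pick a value suited to the device

    // Run the full encoder/decoder pipeline over the samples.
    let ok = samples.withUnsafeBufferPointer { buf in
        whisper_full(ctx, params, buf.baseAddress, Int32(buf.count)) == 0
    }
    guard ok else { return nil }

    // Concatenate the decoded segments into a single transcript.
    var text = ""
    for i in 0..<whisper_full_n_segments(ctx) {
        text += String(cString: whisper_full_get_segment_text(ctx, i))
    }
    return text
}
```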

  1. I recommend the tiny, base or small models for running on an iOS device.

  2. The "Release" build configuration noticeably speeds up transcription. In this project it also adds -O3 -DNDEBUG to "Other C Flags", but setting flags at the app project level is not ideal in a real-world project (they apply to every C/C++ file); consider splitting the xcodeproj into a workspace in your own project.
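
One way to keep such flags out of the app-wide settings is to scope them to a separate whisper target via an xcconfig; a sketch, with a hypothetical file and target name:

```
// Whisper.xcconfig: attached only to a dedicated whisper.cpp target,
// so the optimization flags do not apply to every C/C++ file in the app.
OTHER_CFLAGS = -O3 -DNDEBUG
```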