2.3 KiB
| Category |
|---|
| Sensors |
# useSpeechRecognition
Reactive SpeechRecognition.
## Usage
```ts
import { useSpeechRecognition } from '@vueuse/core'

const {
  isSupported,
  isListening,
  isFinal,
  result,
  start,
  stop,
} = useSpeechRecognition()
```
## Options
The following shows the default values of the options; they are passed directly to the underlying SpeechRecognition API.
```ts
import { useSpeechRecognition } from '@vueuse/core'

useSpeechRecognition({
  lang: 'en-US',
  interimResults: true,
  continuous: true,
})
```
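As an illustrative sketch (not from the official docs), the composable can drive a simple push-to-talk transcript. `isSupported` guards against browsers without the Web Speech API, `toggle` switches recognition on and off, and `isFinal` distinguishes interim from finalized results:

```vue
<script setup lang="ts">
import { useSpeechRecognition } from '@vueuse/core'
import { watch } from 'vue'

const { isSupported, isListening, isFinal, result, toggle } = useSpeechRecognition({
  lang: 'en-US',
  interimResults: true,
  continuous: true,
})

// Log each finalized utterance; interim results update `result` continuously.
watch(isFinal, (final) => {
  if (final)
    console.log('Transcript:', result.value)
})
</script>

<template>
  <div v-if="isSupported">
    <button @click="toggle()">
      {{ isListening ? 'Stop' : 'Start' }} listening
    </button>
    <p>{{ result }}</p>
  </div>
  <p v-else>Speech recognition is not supported in this browser.</p>
</template>
```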
## Type Declarations
```ts
export interface UseSpeechRecognitionOptions extends ConfigurableWindow {
  /**
   * Controls whether continuous results are returned for each recognition, or only a single result.
   *
   * @default true
   */
  continuous?: boolean
  /**
   * Controls whether interim results should be returned (true) or not (false). Interim results are results that are not yet final.
   *
   * @default true
   */
  interimResults?: boolean
  /**
   * Language for SpeechRecognition
   *
   * @default 'en-US'
   */
  lang?: MaybeRefOrGetter<string>
  /**
   * A number representing the maximum returned alternatives for each result.
   *
   * @see https://developer.mozilla.org/en-US/docs/Web/API/SpeechRecognition/maxAlternatives
   * @default 1
   */
  maxAlternatives?: number
}
/**
 * Reactive SpeechRecognition.
 *
 * @see https://vueuse.org/useSpeechRecognition
 * @see https://developer.mozilla.org/en-US/docs/Web/API/SpeechRecognition SpeechRecognition
 * @param options
 */
export declare function useSpeechRecognition(
  options?: UseSpeechRecognitionOptions,
): {
  isSupported: ComputedRef<boolean>
  isListening: ShallowRef<boolean, boolean>
  isFinal: ShallowRef<boolean, boolean>
  recognition: SpeechRecognition | undefined
  result: ShallowRef<string, string>
  error: ShallowRef<
    Error | SpeechRecognitionErrorEvent | undefined,
    Error | SpeechRecognitionErrorEvent | undefined
  >
  toggle: (value?: boolean) => void
  start: () => void
  stop: () => void
}
export type UseSpeechRecognitionReturn = ReturnType<typeof useSpeechRecognition>
```