# OpenAIWhisperParserLocal

> **Class** in `langchain_community`

📖 [View in docs](https://reference.langchain.com/python/langchain-community/document_loaders/parsers/audio/OpenAIWhisperParserLocal)

Transcribe and parse audio files locally with an OpenAI Whisper model, using the `transformers` library.

## Signature

```python
OpenAIWhisperParserLocal(
    self,
    device: str = '0',
    lang_model: Optional[str] = None,
    batch_size: int = 8,
    chunk_length: int = 30,
    forced_decoder_ids: Optional[Tuple[Dict]] = None,
)
```

## Description

- `device`: device to use. By default the GPU is used if available; to force CPU, set `device="cpu"`.
- `lang_model`: Whisper model to use, for example `"openai/whisper-medium"`.
- `forced_decoder_ids`: decoder prompt ids for multilingual models. Usage example:

```python
from transformers import WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-medium")

# Transcribe French audio in French:
forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="transcribe")

# Or translate French audio to English:
forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="translate")
```

## Parameters

| Name | Type | Required | Description |
|------|------|----------|-------------|
| `device` | `str` | No | Device to use; the GPU is used by default if available, set `"cpu"` to force CPU. (default: `'0'`) |
| `lang_model` | `Optional[str]` | No | Whisper model to use, for example `"openai/whisper-medium"`. (default: `None`) |
| `forced_decoder_ids` | `Optional[Tuple[Dict]]` | No | Decoder prompt ids for multilingual models. (default: `None`) |
| `batch_size` | `int` | No | Batch size used for decoding. (default: `8`) |
| `chunk_length` | `int` | No | Chunk length in seconds used during inference. (default: `30`) |

## Extends

- `BaseBlobParser`

## Constructors

```python
__init__(
    self,
    device: str = '0',
    lang_model: Optional[str] = None,
    batch_size: int = 8,
    chunk_length: int = 30,
    forced_decoder_ids: Optional[Tuple[Dict]] = None,
)
```

| Name | Type |
|------|------|
| `device` | `str` |
| `lang_model` | `Optional[str]` |
| `batch_size` | `int` |
| `chunk_length` | `int` |
| `forced_decoder_ids` | `Optional[Tuple[Dict]]` |


## Properties

- `device`
- `lang_model`
- `batch_size`
- `pipe`

## Methods

- [`lazy_parse()`](https://reference.langchain.com/python/langchain-community/document_loaders/parsers/audio/OpenAIWhisperParserLocal/lazy_parse)

---

[View source on GitHub](https://github.com/langchain-ai/langchain-community/blob/d5ea8358933260ad48dd31f7f8076555c7b4885a/libs/community/langchain_community/document_loaders/parsers/audio.py#L343)