# TensorflowDatasetLoader

> **Class** in `langchain_community`

📖 [View in docs](https://reference.langchain.com/python/langchain-community/document_loaders/tensorflow_datasets/TensorflowDatasetLoader)

Load documents from a `TensorFlow Datasets` dataset.

## Signature

```python
TensorflowDatasetLoader(
    self,
    dataset_name: str,
    split_name: str,
    load_max_docs: Optional[int] = 100,
    sample_to_document_function: Optional[Callable[[Dict], Document]] = None,
)
```

## Description

**Example:**

```python
import tensorflow as tf

from langchain_community.document_loaders import TensorflowDatasetLoader
from langchain_core.documents import Document

def decode_to_str(item: tf.Tensor) -> str:
    # TFDS yields byte-string tensors; decode them to Python strings.
    return item.numpy().decode("utf-8")

def mlqaen_example_to_document(example: dict) -> Document:
    return Document(
        page_content=decode_to_str(example["context"]),
        metadata={
            "id": decode_to_str(example["id"]),
            "title": decode_to_str(example["title"]),
            "question": decode_to_str(example["question"]),
            "answer": decode_to_str(example["answers"]["text"][0]),
        },
    )

tsds_client = TensorflowDatasetLoader(
    dataset_name="mlqa/en",
    split_name="test",
    load_max_docs=100,
    sample_to_document_function=mlqaen_example_to_document,
)
```

## Parameters

| Name | Type | Required | Description |
|------|------|----------|-------------|
| `dataset_name` | `str` | Yes | The name of the dataset to load. |
| `split_name` | `str` | Yes | The name of the split to load. |
| `load_max_docs` | `Optional[int]` | No | Maximum number of documents to load. (default: `100`) |
| `sample_to_document_function` | `Optional[Callable[[Dict], Document]]` | No | A function that converts a dataset sample into a `Document`. (default: `None`) |
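Since `sample_to_document_function` is just a callable from a raw sample `dict` to a `Document`, the conversion pattern can be sketched in isolation. The `Document` dataclass below is a hypothetical stand-in for `langchain_core.documents.Document` so the sketch runs without LangChain installed, and the sample values are assumed to be already-decoded Python strings (real TFDS samples hold `tf.Tensor` values that need decoding first):

```python
from dataclasses import dataclass, field
from typing import Dict

# Hypothetical stand-in for langchain_core.documents.Document,
# used only so this sketch runs without LangChain installed.
@dataclass
class Document:
    page_content: str
    metadata: dict = field(default_factory=dict)

def example_to_document(example: Dict) -> Document:
    # Assumes values are plain strings; a real converter would first
    # decode tf.Tensor byte strings (see the MLQA example above).
    return Document(
        page_content=example["context"],
        metadata={"id": example["id"], "question": example["question"]},
    )

sample = {
    "context": "Paris is the capital of France.",
    "id": "q1",
    "question": "What is the capital of France?",
}
doc = example_to_document(sample)
print(doc.page_content)  # Paris is the capital of France.
```

The same shape applies to any TFDS dataset: pick which fields become `page_content` and which become `metadata`, and pass the function as `sample_to_document_function`.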

## Extends

- `BaseLoader`

## Constructors

```python
__init__(
    self,
    dataset_name: str,
    split_name: str,
    load_max_docs: Optional[int] = 100,
    sample_to_document_function: Optional[Callable[[Dict], Document]] = None,
)
```

| Name | Type |
|------|------|
| `dataset_name` | `str` |
| `split_name` | `str` |
| `load_max_docs` | `Optional[int]` |
| `sample_to_document_function` | `Optional[Callable[[Dict], Document]]` |


## Properties

- `dataset_name`
- `split_name`
- `load_max_docs`
- `sample_to_document_function`

## Methods

- [`lazy_load()`](https://reference.langchain.com/python/langchain-community/document_loaders/tensorflow_datasets/TensorflowDatasetLoader/lazy_load)
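`lazy_load()` yields `Document`s one at a time, stopping once `load_max_docs` documents have been produced, so the full dataset is never materialized in memory. That capping behavior can be sketched with a plain generator and `itertools.islice` (a conceptual illustration, not the library's actual internals):

```python
from itertools import islice
from typing import Iterator

def lazy_load(samples: Iterator[dict], load_max_docs: int = 100) -> Iterator[str]:
    # Convert each raw sample in turn, stopping after load_max_docs
    # documents; untouched samples are never pulled from the stream.
    for example in islice(samples, load_max_docs):
        yield example["context"]

# An effectively unbounded dataset: only the first 3 samples are consumed.
stream = ({"context": f"doc {i}"} for i in range(10**9))
docs = list(lazy_load(stream, load_max_docs=3))
print(docs)  # ['doc 0', 'doc 1', 'doc 2']
```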

---

[View source on GitHub](https://github.com/langchain-ai/langchain-community/blob/4b280287bd55b99b44db2dd849f02d66c89534d5/libs/community/langchain_community/document_loaders/tensorflow_datasets.py#L9)