# SpiderLoader

> **Class** in `langchain_community`

📖 [View in docs](https://reference.langchain.com/python/langchain-community/document_loaders/spider/SpiderLoader)

Load web pages as Documents using Spider AI.

Requires the `spider-client` Python package and a Spider API key.
See https://spider.cloud for more.

## Signature

```python
SpiderLoader(
    self,
    url: str,
    *,
    api_key: Optional[str] = None,
    mode: Literal['scrape', 'crawl'] = 'scrape',
    params: Optional[dict] = None,
)
```

## Parameters

| Name | Type | Required | Description |
|------|------|----------|-------------|
| `url` | `str` | Yes | The URL to be processed. |
| `api_key` | `Optional[str]` | No | The Spider API key. If not specified, it is read from the environment. (default: `None`) |
| `mode` | `Literal['scrape', 'crawl']` | No | The mode to run the loader in: `'scrape'` loads a single page, while `'crawl'` follows and loads subpages as well. (default: `'scrape'`) |
| `params` | `Optional[dict]` | No | Additional parameters for the Spider API. (default: `None`) |
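A minimal usage sketch, assuming `langchain-community` and `spider-client` are installed and a Spider API key is available in the environment; the network call is guarded so the snippet is safe to run either way. The `params` key shown is an assumption about the Spider API, not confirmed by this page.

```python
"""Hedged usage sketch for SpiderLoader (guarded: only calls the API
when the package is installed and a key is set)."""
import os

mode = "scrape"  # single page; switch to "crawl" to follow subpages

try:
    from langchain_community.document_loaders import SpiderLoader
    have_loader = True
except ImportError:
    have_loader = False  # langchain-community / spider-client not installed

if have_loader and os.environ.get("SPIDER_API_KEY"):
    loader = SpiderLoader(
        url="https://spider.cloud",
        mode=mode,
        # Assumption: extra options are passed straight through to the
        # Spider API via `params`; key names are illustrative.
        params={"return_format": "markdown"},
    )
    docs = loader.load()  # eager; see lazy_load() below for streaming
    print(f"loaded {len(docs)} document(s)")
```

When `api_key` is omitted, the loader falls back to the environment, so keeping the key out of source code is the usual pattern.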

## Extends

- `BaseLoader`

## Constructors

```python
__init__(
    self,
    url: str,
    *,
    api_key: Optional[str] = None,
    mode: Literal['scrape', 'crawl'] = 'scrape',
    params: Optional[dict] = None,
)
```

| Name | Type |
|------|------|
| `url` | `str` |
| `api_key` | `Optional[str]` |
| `mode` | `Literal['scrape', 'crawl']` |
| `params` | `Optional[dict]` |


## Properties

- `spider`
- `url`
- `mode`
- `params`

## Methods

- [`lazy_load()`](https://reference.langchain.com/python/langchain-community/document_loaders/spider/SpiderLoader/lazy_load)
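A sketch of how `lazy_load()` is typically consumed. The stub class below stands in for `SpiderLoader` so the pattern runs without network access; the `Document` shape (a `page_content` string plus a `metadata` dict) mirrors the one loaders yield, but everything here is illustrative, not the library's implementation.

```python
# Stub illustrating the lazy_load() iteration pattern of BaseLoader
# subclasses. No network or API key needed.
from dataclasses import dataclass, field
from typing import Iterator, List, Tuple


@dataclass
class Document:
    """Minimal stand-in for langchain_core's Document."""
    page_content: str
    metadata: dict = field(default_factory=dict)


class StubSpiderLoader:
    """Illustrative stand-in: yields one Document per page, lazily."""

    def __init__(self, pages: List[Tuple[str, str]]):
        self.pages = pages

    def lazy_load(self) -> Iterator[Document]:
        # Yielding one Document at a time keeps memory flat even when a
        # crawl returns many pages.
        for url, text in self.pages:
            yield Document(page_content=text, metadata={"source": url})


loader = StubSpiderLoader(
    [("https://example.com", "hello"), ("https://example.com/a", "world")]
)
# Iterate lazily instead of materializing every page at once:
sources = [doc.metadata["source"] for doc in loader.lazy_load()]
print(sources)
```

Preferring `lazy_load()` over `load()` matters most in `'crawl'` mode, where the number of returned pages is not known up front.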

---

[View source on GitHub](https://github.com/langchain-ai/langchain-community/blob/a6a6079511ac8a5c1293337f88096b8641562e77/libs/community/langchain_community/document_loaders/spider.py#L8)