# ScrapingAntLoader

> **Class** in `langchain_community`

📖 [View in docs](https://reference.langchain.com/python/langchain-community/document_loaders/scrapingant/ScrapingAntLoader)

Turn a URL into LLM-accessible markdown with `ScrapingAnt`.

For further details, visit: https://docs.scrapingant.com/python-client

## Signature

```python
ScrapingAntLoader(
    self,
    urls: List[str],
    *,
    api_key: Optional[str] = None,
    scrape_config: Optional[dict] = None,
    continue_on_failure: bool = True,
)
```

## Parameters

| Name | Type | Required | Description |
|------|------|----------|-------------|
| `urls` | `List[str]` | Yes | List of URLs to scrape. |
| `api_key` | `Optional[str]` | No | The ScrapingAnt API key. If not provided, the `SCRAPINGANT_API_KEY` environment variable must be set. (default: `None`) |
| `scrape_config` | `Optional[dict]` | No | The scraping config passed to `ScrapingAntClient.markdown_request`. (default: `None`) |
| `continue_on_failure` | `bool` | No | Whether to continue if scraping a URL fails. (default: `True`) |
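A minimal usage sketch, assuming the `scrapingant-client` package is installed and a valid ScrapingAnt API key is available (the URL and the `<YOUR_SCRAPINGANT_API_KEY>` placeholder below are illustrative):

```python
from langchain_community.document_loaders import ScrapingAntLoader

# Requires `pip install scrapingant-client`.
loader = ScrapingAntLoader(
    ["https://scrapingant.com/"],          # URLs to convert to markdown
    api_key="<YOUR_SCRAPINGANT_API_KEY>",  # or set SCRAPINGANT_API_KEY instead
    continue_on_failure=True,              # skip URLs that fail to scrape
)

# load() collects all Documents; lazy_load() yields them one at a time.
documents = loader.load()
for doc in documents:
    print(doc.metadata["url"], len(doc.page_content))
```

With `continue_on_failure=True` (the default), a URL that fails to scrape is logged and skipped rather than raising, so the loop above only sees successfully scraped documents.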

## Extends

- `BaseLoader`

## Constructors

```python
__init__(
    self,
    urls: List[str],
    *,
    api_key: Optional[str] = None,
    scrape_config: Optional[dict] = None,
    continue_on_failure: bool = True,
) -> None
```

| Name | Type |
|------|------|
| `urls` | `List[str]` |
| `api_key` | `Optional[str]` |
| `scrape_config` | `Optional[dict]` |
| `continue_on_failure` | `bool` |


## Properties

- `client`
- `urls`
- `scrape_config`
- `continue_on_failure`

## Methods

- [`lazy_load()`](https://reference.langchain.com/python/langchain-community/document_loaders/scrapingant/ScrapingAntLoader/lazy_load)

---

[View source on GitHub](https://github.com/langchain-ai/langchain-community/blob/4b280287bd55b99b44db2dd849f02d66c89534d5/libs/community/langchain_community/document_loaders/scrapingant.py#L13)