| id | text | source |
|---|---|---|
f5b014096545-0
|
langchain.document_loaders.pdf.PyPDFium2Loader¶
class langchain.document_loaders.pdf.PyPDFium2Loader(file_path: str)[source]¶
Bases: BasePDFLoader
Loads a PDF with pypdfium2 and chunks at character level.
Initialize with file path.
Methods
__init__(file_path)
Initialize with file path.
lazy_load()
Lazy load given path as pages.
load()
Load given path as pages.
load_and_split([text_splitter])
Load Documents and split into chunks.
Attributes
source
lazy_load() → Iterator[Document][source]¶
Lazy load given path as pages.
load() → List[Document][source]¶
Load given path as pages.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
property source: str¶
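A minimal usage sketch, assuming the pypdfium2 package is installed; "example.pdf" is a placeholder path:
```python
from langchain.document_loaders import PyPDFium2Loader

loader = PyPDFium2Loader("example.pdf")   # placeholder path
docs = loader.load()                      # one Document per page
for doc in loader.lazy_load():            # or stream the pages lazily
    print(doc.metadata, len(doc.page_content))
```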
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PyPDFium2Loader.html
|
2eee89f75308-0
|
langchain.document_loaders.mhtml.MHTMLLoader¶
class langchain.document_loaders.mhtml.MHTMLLoader(file_path: str, open_encoding: Optional[str] = None, bs_kwargs: Optional[dict] = None, get_text_separator: str = '')[source]¶
Bases: BaseLoader
Loader that uses beautiful soup to parse HTML files.
Initialise with path, and optionally, file encoding to use, and any kwargs
to pass to the BeautifulSoup object.
Parameters
file_path – The path to the file to load.
open_encoding – The encoding to use when opening the file.
bs_kwargs – soup kwargs to pass to the BeautifulSoup object.
get_text_separator – The separator to use when getting text from the soup.
Methods
__init__(file_path[, open_encoding, ...])
Initialise with path, and optionally, file encoding to use, and any kwargs to pass to the BeautifulSoup object.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
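A minimal usage sketch, assuming beautifulsoup4 is installed; "saved_page.mhtml" is a placeholder path:
```python
from langchain.document_loaders import MHTMLLoader

loader = MHTMLLoader(
    file_path="saved_page.mhtml",   # placeholder path
    open_encoding="utf-8",
    get_text_separator="\n",
)
docs = loader.load()
print(docs[0].page_content[:200])
```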
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.mhtml.MHTMLLoader.html
|
ba6c0ef3fd0e-0
|
langchain.document_loaders.parsers.registry.get_parser¶
langchain.document_loaders.parsers.registry.get_parser(parser_name: str) → BaseBlobParser[source]¶
Get a parser by parser name.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.registry.get_parser.html
|
4782d4e61c73-0
|
langchain.document_loaders.pdf.OnlinePDFLoader¶
class langchain.document_loaders.pdf.OnlinePDFLoader(file_path: str)[source]¶
Bases: BasePDFLoader
Loader that loads online PDFs.
Initialize with file path.
Methods
__init__(file_path)
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
Attributes
source
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
property source: str¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.OnlinePDFLoader.html
|
054821ee375e-0
|
langchain.document_loaders.blockchain.BlockchainType¶
class langchain.document_loaders.blockchain.BlockchainType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
Bases: Enum
Enumerator of the supported blockchains.
Attributes
ETH_MAINNET
ETH_GOERLI
POLYGON_MAINNET
POLYGON_MUMBAI
ETH_GOERLI = 'eth-goerli'¶
ETH_MAINNET = 'eth-mainnet'¶
POLYGON_MAINNET = 'polygon-mainnet'¶
POLYGON_MUMBAI = 'polygon-mumbai'¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blockchain.BlockchainType.html
|
5ee981f9568c-0
|
langchain.document_loaders.base.BaseBlobParser¶
class langchain.document_loaders.base.BaseBlobParser[source]¶
Bases: ABC
Abstract interface for blob parsers.
A blob parser provides a way to parse raw data stored in a blob into one
or more documents.
The parser can be composed with blob loaders, making it easy to re-use
a parser independent of how the blob was originally loaded.
Methods
__init__()
lazy_parse(blob)
Lazy parsing interface.
parse(blob)
Eagerly parse the blob into a document or documents.
abstract lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazy parsing interface.
Subclasses are required to implement this method.
Parameters
blob – Blob instance
Returns
Generator of documents
parse(blob: Blob) → List[Document][source]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environment.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents
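An illustrative subclass sketch showing the lazy_parse contract; LineParser is a hypothetical name, not part of the library:
```python
from typing import Iterator

from langchain.document_loaders.base import BaseBlobParser
from langchain.document_loaders.blob_loaders import Blob
from langchain.schema import Document

class LineParser(BaseBlobParser):
    """Hypothetical parser: yields one Document per non-empty line of the blob."""

    def lazy_parse(self, blob: Blob) -> Iterator[Document]:
        for line in blob.as_string().splitlines():
            if line.strip():
                yield Document(page_content=line, metadata={"source": blob.source})

# parse() simply materializes lazy_parse() into a list.
docs = LineParser().parse(Blob.from_data("hello\nworld"))
```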
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.base.BaseBlobParser.html
|
791f17b7bb27-0
|
langchain.document_loaders.apify_dataset.ApifyDatasetLoader¶
class langchain.document_loaders.apify_dataset.ApifyDatasetLoader(dataset_id: str, dataset_mapping_function: Callable[[Dict], Document])[source]¶
Bases: BaseLoader, BaseModel
Loading Documents from Apify datasets.
Initialize the loader with an Apify dataset ID and a mapping function.
Parameters
dataset_id (str) – The ID of the dataset on the Apify platform.
dataset_mapping_function (Callable) – A function that takes a single
dictionary (an Apify dataset item) and converts it to an instance
of the Document class.
param apify_client: Any = None¶
An instance of the ApifyClient class from the apify-client Python package.
param dataset_id: str [Required]¶
The ID of the dataset on the Apify platform.
param dataset_mapping_function: Callable[[Dict], langchain.schema.document.Document] [Required]¶
A custom function that takes a single dictionary (an Apify dataset item)
and converts it to an instance of the Document class.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
validator validate_environment » all fields[source]¶
Validate environment.
Parameters
values – The values to validate.
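A minimal usage sketch, assuming the apify-client package is installed and an Apify API token is configured; the dataset ID and item field names are placeholders that depend on your dataset schema:
```python
from langchain.document_loaders import ApifyDatasetLoader
from langchain.schema import Document

loader = ApifyDatasetLoader(
    dataset_id="your-dataset-id",   # placeholder
    dataset_mapping_function=lambda item: Document(
        page_content=item.get("text", ""),
        metadata={"source": item.get("url", "")},   # placeholder field names
    ),
)
docs = loader.load()
```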
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.apify_dataset.ApifyDatasetLoader.html
|
4eef79132bc6-0
|
langchain.document_loaders.arxiv.ArxivLoader¶
class langchain.document_loaders.arxiv.ArxivLoader(query: str, load_max_docs: Optional[int] = 100, load_all_available_meta: Optional[bool] = False)[source]¶
Bases: BaseLoader
Loads a query result from arxiv.org into a list of Documents.
Each arXiv result is returned as one Document.
The loader converts the original PDF format into text.
Methods
__init__(query[, load_max_docs, ...])
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
Attributes
query
The query to be passed to the arxiv.org API.
load_max_docs
The maximum number of documents to load.
load_all_available_meta
Whether to load all available metadata.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
load_all_available_meta¶
Whether to load all available metadata.
load_max_docs¶
The maximum number of documents to load.
query¶
The query to be passed to the arxiv.org API.
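A minimal usage sketch, assuming the arxiv and PyMuPDF packages are installed; the query string is illustrative:
```python
from langchain.document_loaders import ArxivLoader

loader = ArxivLoader(query="quantum error correction", load_max_docs=2)
docs = loader.load()
print(docs[0].metadata)            # more fields with load_all_available_meta=True
print(docs[0].page_content[:200])  # text extracted from the PDF
```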
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.arxiv.ArxivLoader.html
|
036f472b4419-0
|
langchain.document_loaders.rst.UnstructuredRSTLoader¶
class langchain.document_loaders.rst.UnstructuredRSTLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredFileLoader
Loader that uses unstructured to load RST files.
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rst.UnstructuredRSTLoader.html
|
ca0c904e60a5-0
|
langchain.document_loaders.helpers.detect_file_encodings¶
langchain.document_loaders.helpers.detect_file_encodings(file_path: str, timeout: int = 5) → List[FileEncoding][source]¶
Try to detect the file encoding.
Returns a list of FileEncoding tuples with the detected encodings ordered
by confidence.
Parameters
file_path – The path to the file to detect the encoding for.
timeout – The timeout in seconds for the encoding detection.
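A minimal usage sketch, assuming the chardet package is installed; "legacy.txt" is a placeholder path:
```python
from langchain.document_loaders.helpers import detect_file_encodings

for enc in detect_file_encodings("legacy.txt", timeout=5):
    print(enc.encoding, enc.confidence)   # candidates ordered by confidence
```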
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.helpers.detect_file_encodings.html
|
5e1dd02d87c6-0
|
langchain.document_loaders.generic.GenericLoader¶
class langchain.document_loaders.generic.GenericLoader(blob_loader: BlobLoader, blob_parser: BaseBlobParser)[source]¶
Bases: BaseLoader
A generic document loader.
A generic document loader that allows combining an arbitrary blob loader with
a blob parser.
Examples
from langchain.document_loaders import GenericLoader
from langchain.document_loaders.blob_loaders import FileSystemBlobLoader
loader = GenericLoader.from_filesystem(
    path="path/to/directory",
    glob="**/[!.]*",
    suffixes=[".pdf"],
    show_progress=True,
)
docs = loader.lazy_load()
next(docs)
Example instantiations to change which files are loaded:
.. code-block:: python

    # Recursively load all text files in a directory.
    loader = GenericLoader.from_filesystem("/path/to/dir", glob="**/*.txt")
    # Recursively load all non-hidden files in a directory.
    loader = GenericLoader.from_filesystem("/path/to/dir", glob="**/[!.]*")
    # Load all files in a directory without recursion.
    loader = GenericLoader.from_filesystem("/path/to/dir", glob="*")

Example instantiations to change which parser is used:
.. code-block:: python

    from langchain.document_loaders.parsers.pdf import PyPDFParser

    # Recursively load all PDF files in a directory and parse them with PyPDFParser.
    loader = GenericLoader.from_filesystem(
        "/path/to/dir",
        glob="**/*.pdf",
        parser=PyPDFParser()
    )
A generic document loader.
Parameters
blob_loader – A blob loader which knows how to yield blobs
blob_parser – A blob parser which knows how to parse blobs into documents
Methods
__init__(blob_loader, blob_parser)
A generic document loader.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.generic.GenericLoader.html
|
5e1dd02d87c6-1
|
Methods
__init__(blob_loader, blob_parser)
A generic document loader.
from_filesystem(path, *[, glob, suffixes, ...])
Create a generic document loader using a filesystem blob loader.
lazy_load()
Load documents lazily.
load()
Load all documents.
load_and_split([text_splitter])
Load all documents and split them into chunks.
classmethod from_filesystem(path: Union[str, Path], *, glob: str = '**/[!.]*', suffixes: Optional[Sequence[str]] = None, show_progress: bool = False, parser: Union[Literal['default'], BaseBlobParser] = 'default') → GenericLoader[source]¶
Create a generic document loader using a filesystem blob loader.
Parameters
path – The path to the directory to load documents from.
glob – The glob pattern to use to find documents.
suffixes – The suffixes to use to filter documents. If None, all files
matching the glob will be loaded.
show_progress – Whether to show a progress bar or not (requires tqdm).
Proxies to the file system loader.
parser – A blob parser which knows how to parse blobs into documents
Returns
A generic document loader.
lazy_load() → Iterator[Document][source]¶
Load documents lazily. Use this when working at a large scale.
load() → List[Document][source]¶
Load all documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document][source]¶
Load all documents and split them into chunks.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.generic.GenericLoader.html
|
be8fe256d8ef-0
|
langchain.document_loaders.pdf.PyMuPDFLoader¶
class langchain.document_loaders.pdf.PyMuPDFLoader(file_path: str)[source]¶
Bases: BasePDFLoader
Loader that uses PyMuPDF to load PDF files.
Initialize with file path.
Methods
__init__(file_path)
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load(**kwargs)
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
Attributes
source
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load(**kwargs: Optional[Any]) → List[Document][source]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
property source: str¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PyMuPDFLoader.html
|
73dfd0d1372b-0
|
langchain.document_loaders.airbyte_json.AirbyteJSONLoader¶
class langchain.document_loaders.airbyte_json.AirbyteJSONLoader(file_path: str)[source]¶
Bases: BaseLoader
Loader that loads local airbyte json files.
Initialize with a file path. This should start with ‘/tmp/airbyte_local/’.
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
Attributes
file_path
Path to the local Airbyte JSON file to load.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
file_path¶
Path to the local Airbyte JSON file to load.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte_json.AirbyteJSONLoader.html
|
13bdea28c7a8-0
|
langchain.document_loaders.notebook.concatenate_cells¶
langchain.document_loaders.notebook.concatenate_cells(cell: dict, include_outputs: bool, max_output_length: int, traceback: bool) → str[source]¶
Combine cells information in a readable format ready to be used.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.notebook.concatenate_cells.html
|
af6c111fc27a-0
|
langchain.document_loaders.url.UnstructuredURLLoader¶
class langchain.document_loaders.url.UnstructuredURLLoader(urls: List[str], continue_on_failure: bool = True, mode: str = 'single', show_progress_bar: bool = False, **unstructured_kwargs: Any)[source]¶
Bases: BaseLoader
Loader that uses unstructured to load HTML files.
Initialize with file path.
Methods
__init__(urls[, continue_on_failure, mode, ...])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
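A minimal usage sketch, assuming the unstructured package is installed; the URLs are placeholders:
```python
from langchain.document_loaders import UnstructuredURLLoader

loader = UnstructuredURLLoader(
    urls=["https://example.com/a", "https://example.com/b"],   # placeholders
    continue_on_failure=True,
    mode="elements",   # or "single" for one Document per URL
)
docs = loader.load()
```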
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.url.UnstructuredURLLoader.html
|
68d85d9828db-0
|
langchain.document_loaders.twitter.TwitterTweetLoader¶
class langchain.document_loaders.twitter.TwitterTweetLoader(auth_handler: Union[OAuthHandler, OAuth2BearerHandler], twitter_users: Sequence[str], number_tweets: Optional[int] = 100)[source]¶
Bases: BaseLoader
Twitter tweets loader.
Read tweets of a user's Twitter handle.
First you need to go to
https://developer.twitter.com/en/docs/twitter-api/getting-started/getting-access-to-the-twitter-api
to get your token, and create a v2 version of the app.
Methods
__init__(auth_handler, twitter_users[, ...])
from_bearer_token(oauth2_bearer_token, ...)
Create a TwitterTweetLoader from OAuth2 bearer token.
from_secrets(access_token, ...[, number_tweets])
Create a TwitterTweetLoader from access tokens and secrets.
lazy_load()
A lazy loader for Documents.
load()
Load tweets.
load_and_split([text_splitter])
Load Documents and split into chunks.
classmethod from_bearer_token(oauth2_bearer_token: str, twitter_users: Sequence[str], number_tweets: Optional[int] = 100) → TwitterTweetLoader[source]¶
Create a TwitterTweetLoader from OAuth2 bearer token.
classmethod from_secrets(access_token: str, access_token_secret: str, consumer_key: str, consumer_secret: str, twitter_users: Sequence[str], number_tweets: Optional[int] = 100) → TwitterTweetLoader[source]¶
Create a TwitterTweetLoader from access tokens and secrets.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load tweets.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.twitter.TwitterTweetLoader.html
|
68d85d9828db-1
|
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
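A minimal usage sketch, assuming the tweepy package is installed; the bearer token and handle are placeholders:
```python
from langchain.document_loaders import TwitterTweetLoader

loader = TwitterTweetLoader.from_bearer_token(
    oauth2_bearer_token="YOUR_BEARER_TOKEN",   # placeholder
    twitter_users=["some_handle"],             # placeholder handle
    number_tweets=10,
)
docs = loader.load()
```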
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.twitter.TwitterTweetLoader.html
|
4f3503f8db4d-0
|
langchain.document_loaders.text.TextLoader¶
class langchain.document_loaders.text.TextLoader(file_path: str, encoding: Optional[str] = None, autodetect_encoding: bool = False)[source]¶
Bases: BaseLoader
Load text files.
Parameters
file_path – Path to the file to load.
encoding – File encoding to use. If None, the file will be loaded
with the default system encoding.
autodetect_encoding – Whether to try to autodetect the file encoding
if the specified encoding fails.
Initialize with file path.
Methods
__init__(file_path[, encoding, ...])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load from file path.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load from file path.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
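A minimal usage sketch; "notes.txt" is a placeholder path (encoding autodetection additionally requires chardet):
```python
from langchain.document_loaders import TextLoader

loader = TextLoader("notes.txt", encoding="utf-8", autodetect_encoding=True)
docs = loader.load()   # a single Document containing the whole file
```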
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.text.TextLoader.html
|
21585e4db667-0
|
langchain.document_loaders.rtf.UnstructuredRTFLoader¶
class langchain.document_loaders.rtf.UnstructuredRTFLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredFileLoader
Loader that uses unstructured to load rtf files.
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rtf.UnstructuredRTFLoader.html
|
3023df87bf82-0
|
langchain.document_loaders.github.BaseGitHubLoader¶
class langchain.document_loaders.github.BaseGitHubLoader(*, repo: str, access_token: str)[source]¶
Bases: BaseLoader, BaseModel, ABC
Load issues of a GitHub repository.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param access_token: str [Required]¶
Personal access token - see https://github.com/settings/tokens?type=beta
param repo: str [Required]¶
Name of repository
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
abstract load() → List[Document]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
validator validate_environment » all fields[source]¶
Validate that access token exists in environment.
property headers: Dict[str, str]¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.github.BaseGitHubLoader.html
|
1930de7da119-0
|
langchain.document_loaders.url_playwright.PlaywrightURLLoader¶
class langchain.document_loaders.url_playwright.PlaywrightURLLoader(urls: List[str], continue_on_failure: bool = True, headless: bool = True, remove_selectors: Optional[List[str]] = None)[source]¶
Bases: BaseLoader
Loader that uses Playwright to load a page and unstructured to parse the HTML.
This is useful for loading pages that require JavaScript to render.
urls¶
List of URLs to load.
Type
List[str]
continue_on_failure¶
If True, continue loading other URLs on failure.
Type
bool
headless¶
If True, the browser will run in headless mode.
Type
bool
Load a list of URLs using Playwright and unstructured.
Methods
__init__(urls[, continue_on_failure, ...])
Load a list of URLs using Playwright and unstructured.
lazy_load()
A lazy loader for Documents.
load()
Load the specified URLs using Playwright and create Document instances.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load the specified URLs using Playwright and create Document instances.
Returns
A list of Document instances with loaded content.
Return type
List[Document]
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
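A minimal usage sketch, assuming the playwright and unstructured packages (and a Playwright browser) are installed; the URL and selectors are placeholders:
```python
from langchain.document_loaders import PlaywrightURLLoader

loader = PlaywrightURLLoader(
    urls=["https://example.com"],            # placeholder
    headless=True,
    remove_selectors=["header", "footer"],   # optional CSS selectors to strip
)
docs = loader.load()
```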
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.url_playwright.PlaywrightURLLoader.html
|
c092f24861c1-0
|
langchain.document_loaders.s3_file.S3FileLoader¶
class langchain.document_loaders.s3_file.S3FileLoader(bucket: str, key: str)[source]¶
Bases: BaseLoader
Loading logic for documents from s3.
Initialize with bucket and key name.
Methods
__init__(bucket, key)
Initialize with bucket and key name.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
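A minimal usage sketch, assuming boto3, AWS credentials, and the unstructured package are available; bucket and key are placeholders:
```python
from langchain.document_loaders import S3FileLoader

loader = S3FileLoader(bucket="my-bucket", key="reports/q1.pdf")   # placeholders
docs = loader.load()
```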
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.s3_file.S3FileLoader.html
|
bae9db184a6f-0
|
langchain.document_loaders.open_city_data.OpenCityDataLoader¶
class langchain.document_loaders.open_city_data.OpenCityDataLoader(city_id: str, dataset_id: str, limit: int)[source]¶
Bases: BaseLoader
Loader that loads Open city data.
Initialize with dataset_id
Methods
__init__(city_id, dataset_id, limit)
Initialize with dataset_id
lazy_load()
Lazy load records.
load()
Load records.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document][source]¶
Lazy load records.
load() → List[Document][source]¶
Load records.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
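A minimal usage sketch, assuming the sodapy package is installed; the Socrata host and dataset ID are placeholders:
```python
from langchain.document_loaders import OpenCityDataLoader

loader = OpenCityDataLoader(
    city_id="data.cityname.gov",   # placeholder Socrata host
    dataset_id="abcd-1234",        # placeholder dataset ID
    limit=100,
)
records = loader.lazy_load()
print(next(records).page_content)
```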
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.open_city_data.OpenCityDataLoader.html
|
9fc11293957a-0
|
langchain.document_loaders.googledrive.GoogleDriveLoader¶
class langchain.document_loaders.googledrive.GoogleDriveLoader(*, service_account_key: Path = PosixPath('/home/docs/.credentials/keys.json'), credentials_path: Path = PosixPath('/home/docs/.credentials/credentials.json'), token_path: Path = PosixPath('/home/docs/.credentials/token.json'), folder_id: Optional[str] = None, document_ids: Optional[List[str]] = None, file_ids: Optional[List[str]] = None, recursive: bool = False, file_types: Optional[Sequence[str]] = None, load_trashed_files: bool = False, file_loader_cls: Any = None, file_loader_kwargs: Dict[str, Any] = {})[source]¶
Bases: BaseLoader, BaseModel
Loads Google Docs from Google Drive.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param credentials_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')¶
Path to the credentials file.
param document_ids: Optional[List[str]] = None¶
The document ids to load from.
param file_ids: Optional[List[str]] = None¶
The file ids to load from.
param file_loader_cls: Any = None¶
The file loader class to use.
param file_loader_kwargs: Dict[str, Any] = {}¶
The file loader kwargs to use.
param file_types: Optional[Sequence[str]] = None¶
The file types to load. Only applies when folder_id is given.
param folder_id: Optional[str] = None¶
The folder id to load from.
param load_trashed_files: bool = False¶
Whether to load trashed files. Only applies when folder_id is given.
param recursive: bool = False¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.googledrive.GoogleDriveLoader.html
|
9fc11293957a-1
|
param recursive: bool = False¶
Whether to load recursively. Only applies when folder_id is given.
param service_account_key: pathlib.Path = PosixPath('/home/docs/.credentials/keys.json')¶
Path to the service account key file.
param token_path: pathlib.Path = PosixPath('/home/docs/.credentials/token.json')¶
Path to the token file.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
validator validate_credentials_path » credentials_path[source]¶
Validate that credentials_path exists.
validator validate_inputs » all fields[source]¶
Validate that either folder_id or document_ids is set, but not both.
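A minimal usage sketch, assuming the Google API client packages and the credentials/token files are in place; the folder ID is a placeholder:
```python
from langchain.document_loaders import GoogleDriveLoader

loader = GoogleDriveLoader(
    folder_id="your-folder-id",   # placeholder; alternatively pass document_ids or file_ids
    recursive=False,
)
docs = loader.load()
```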
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.googledrive.GoogleDriveLoader.html
|
90ab19f31ef6-0
|
langchain.document_loaders.iugu.IuguLoader¶
class langchain.document_loaders.iugu.IuguLoader(resource: str, api_token: Optional[str] = None)[source]¶
Bases: BaseLoader
Loader that fetches data from IUGU.
Initialize the IUGU resource.
Parameters
resource – The name of the resource to fetch.
api_token – The IUGU API token to use.
Methods
__init__(resource[, api_token])
Initialize the IUGU resource.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.iugu.IuguLoader.html
|
276ad4de5c17-0
|
langchain.document_loaders.parsers.grobid.ServerUnavailableException¶
class langchain.document_loaders.parsers.grobid.ServerUnavailableException[source]¶
Bases: Exception
add_note()¶
Exception.add_note(note) –
add a note to the exception
with_traceback()¶
Exception.with_traceback(tb) –
set self.__traceback__ to tb and return self.
args¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.grobid.ServerUnavailableException.html
|
a9236c5c1fa9-0
|
langchain.document_loaders.srt.SRTLoader¶
class langchain.document_loaders.srt.SRTLoader(file_path: str)[source]¶
Bases: BaseLoader
Loader for .srt (subtitle) files.
Initialize with file path.
Methods
__init__(file_path)
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load using pysrt file.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load using pysrt file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.srt.SRTLoader.html
|
2020d837cba1-0
|
langchain.document_loaders.xml.UnstructuredXMLLoader¶
class langchain.document_loaders.xml.UnstructuredXMLLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredFileLoader
Loader that uses unstructured to load XML files.
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.xml.UnstructuredXMLLoader.html
|
88c87cd897ec-0
|
langchain.document_loaders.azlyrics.AZLyricsLoader¶
class langchain.document_loaders.azlyrics.AZLyricsLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None, verify_ssl: Optional[bool] = True, proxies: Optional[dict] = None)[source]¶
Bases: WebBaseLoader
Loader that loads AZLyrics webpages.
Initialize with webpage path.
Methods
__init__(web_path[, header_template, ...])
Initialize with webpage path.
aload()
Load text from the urls in web_path async into Documents.
fetch_all(urls)
Fetch all urls concurrently with rate limiting.
lazy_load()
Lazy load text from the url(s) in web_path.
load()
Load webpages into Documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
scrape([parser])
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls[, parser])
Fetch all urls, then return soups for all results.
Attributes
bs_get_text_kwargs
kwargs for beautifulsoup4 get_text
default_parser
Default parser to use for BeautifulSoup.
raise_for_status
Raise an exception if http status code denotes an error.
requests_kwargs
kwargs for requests
requests_per_second
Max number of concurrent requests to make.
web_path
aload() → List[Document]¶
Load text from the urls in web_path async into Documents.
async fetch_all(urls: List[str]) → Any¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶
Load webpages into Documents.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.azlyrics.AZLyricsLoader.html
|
88c87cd897ec-1
|
load() → List[Document][source]¶
Load webpages into Documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
scrape(parser: Optional[str] = None) → Any¶
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any]¶
Fetch all urls, then return soups for all results.
bs_get_text_kwargs: Dict[str, Any] = {}¶
kwargs for beautifulsoup4 get_text
default_parser: str = 'html.parser'¶
Default parser to use for BeautifulSoup.
raise_for_status: bool = False¶
Raise an exception if http status code denotes an error.
requests_kwargs: Dict[str, Any] = {}¶
kwargs for requests
requests_per_second: int = 2¶
Max number of concurrent requests to make.
property web_path: str¶
web_paths: List[str]¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.azlyrics.AZLyricsLoader.html
|
61ff40dcbe9d-0
|
langchain.document_loaders.telegram.concatenate_rows¶
langchain.document_loaders.telegram.concatenate_rows(row: dict) → str[source]¶
Combine message information in a readable format ready to be used.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.telegram.concatenate_rows.html
|
4e375b3a768a-0
|
langchain.document_loaders.obsidian.ObsidianLoader¶
class langchain.document_loaders.obsidian.ObsidianLoader(path: str, encoding: str = 'UTF-8', collect_metadata: bool = True)[source]¶
Bases: BaseLoader
Loader that loads Obsidian files from disk.
Initialize with path.
Methods
__init__(path[, encoding, collect_metadata])
Initialize with path.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
Attributes
FRONT_MATTER_REGEX
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
FRONT_MATTER_REGEX = re.compile('^---\\n(.*?)\\n---\\n', re.MULTILINE|re.DOTALL)¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.obsidian.ObsidianLoader.html
|
8d448f38dffb-0
|
langchain.document_loaders.acreom.AcreomLoader¶
class langchain.document_loaders.acreom.AcreomLoader(path: str, encoding: str = 'UTF-8', collect_metadata: bool = True)[source]¶
Bases: BaseLoader
Loader that loads acreom vault from a directory.
Methods
__init__(path[, encoding, collect_metadata])
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
Attributes
FRONT_MATTER_REGEX
Regex to match front matter metadata in markdown files.
file_path
Path to the directory containing the markdown files.
encoding
Encoding to use when reading the files.
collect_metadata
Whether to collect metadata from the front matter.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
FRONT_MATTER_REGEX = re.compile('^---\\n(.*?)\\n---\\n', re.MULTILINE|re.DOTALL)¶
Regex to match front matter metadata in markdown files.
collect_metadata¶
Whether to collect metadata from the front matter.
encoding¶
Encoding to use when reading the files.
file_path¶
Path to the directory containing the markdown files.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.acreom.AcreomLoader.html
|
5a063aeffa69-0
|
langchain.document_loaders.unstructured.UnstructuredFileIOLoader¶
class langchain.document_loaders.unstructured.UnstructuredFileIOLoader(file: Union[IO, Sequence[IO]], mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredBaseLoader
UnstructuredFileIOLoader uses unstructured to load files. The file loader
uses the unstructured partition function and will automatically detect the file
type. You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
```python
from langchain.document_loaders import UnstructuredFileIOLoader

with open("example.pdf", "rb") as f:
    loader = UnstructuredFileIOLoader(
        f, mode="elements", strategy="fast",
    )
    docs = loader.load()
```
References
https://unstructured-io.github.io/unstructured/bricks.html#partition
Initialize with file path.
Methods
__init__(file[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredFileIOLoader.html
|
5a063aeffa69-1
|
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredFileIOLoader.html
|
8a8cc227f8d6-0
|
langchain.document_loaders.facebook_chat.FacebookChatLoader¶
class langchain.document_loaders.facebook_chat.FacebookChatLoader(path: str)[source]¶
Bases: BaseLoader
Loads Facebook messages json directory dump.
Initialize with a path.
Methods
__init__(path)
Initialize with a path.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.facebook_chat.FacebookChatLoader.html
|
c77f7df6d5b0-0
|
langchain.document_loaders.blob_loaders.schema.Blob¶
class langchain.document_loaders.blob_loaders.schema.Blob(*, data: Optional[Union[bytes, str]] = None, mimetype: Optional[str] = None, encoding: str = 'utf-8', path: Optional[Union[str, PurePath]] = None)[source]¶
Bases: BaseModel
A blob is used to represent raw data by either reference or value.
Provides an interface to materialize the blob in different representations, and
help to decouple the development of data loaders from the downstream parsing of
the raw data.
Inspired by: https://developer.mozilla.org/en-US/docs/Web/API/Blob
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param data: Optional[Union[bytes, str]] = None¶
param encoding: str = 'utf-8'¶
param mimetype: Optional[str] = None¶
param path: Optional[Union[str, pathlib.PurePath]] = None¶
as_bytes() → bytes[source]¶
Read data as bytes.
as_bytes_io() → Generator[Union[BytesIO, BufferedReader], None, None][source]¶
Read data as a byte stream.
as_string() → str[source]¶
Read data as a string.
validator check_blob_is_valid » all fields[source]¶
Verify that either data or path is provided.
classmethod from_data(data: Union[str, bytes], *, encoding: str = 'utf-8', mime_type: Optional[str] = None, path: Optional[str] = None) → Blob[source]¶
Initialize the blob from in-memory data.
Parameters
data – the in-memory data associated with the blob
encoding – Encoding to use if decoding the bytes into a string
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.schema.Blob.html
|
c77f7df6d5b0-1
|
encoding – Encoding to use if decoding the bytes into a string
mime_type – if provided, will be set as the mime-type of the data
path – if provided, will be set as the source from which the data came
Returns
Blob instance
classmethod from_path(path: Union[str, PurePath], *, encoding: str = 'utf-8', mime_type: Optional[str] = None, guess_type: bool = True) → Blob[source]¶
Load the blob from a path like object.
Parameters
path – path like object to file to be read
encoding – Encoding to use if decoding the bytes into a string
mime_type – if provided, will be set as the mime-type of the data
guess_type – If True, the mimetype will be guessed from the file extension,
if a mime-type was not provided
Returns
Blob instance
property source: Optional[str]¶
The source location of the blob as string if known otherwise none.
model Config[source]¶
Bases: object
arbitrary_types_allowed = True¶
frozen = True¶
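A minimal usage sketch; "report.pdf" is a placeholder path:
```python
from langchain.document_loaders.blob_loaders import Blob

blob = Blob.from_data("hello world", mime_type="text/plain")
print(blob.as_string())   # "hello world"
print(blob.as_bytes())    # b"hello world"

file_blob = Blob.from_path("report.pdf")   # mimetype guessed from the extension
with file_blob.as_bytes_io() as f:         # opens the file only here; path must exist
    header = f.read(4)
```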
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.schema.Blob.html
|
b2da08d33074-0
|
langchain.document_loaders.docugami.DocugamiLoader¶
class langchain.document_loaders.docugami.DocugamiLoader(*, api: str = 'https://api.docugami.com/v1preview1', access_token: Optional[str] = None, docset_id: Optional[str] = None, document_ids: Optional[Sequence[str]] = None, file_paths: Optional[Sequence[Union[Path, str]]] = None, min_chunk_size: int = 32)[source]¶
Bases: BaseLoader, BaseModel
Loads processed docs from Docugami.
To use, you should have the lxml python package installed.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param access_token: Optional[str] = None¶
The Docugami API access token to use.
param api: str = 'https://api.docugami.com/v1preview1'¶
The Docugami API endpoint to use.
param docset_id: Optional[str] = None¶
The Docugami API docset ID to use.
param document_ids: Optional[Sequence[str]] = None¶
The Docugami API document IDs to use.
param file_paths: Optional[Sequence[Union[pathlib.Path, str]]] = None¶
The local file paths to use.
param min_chunk_size: int = 32¶
The minimum chunk size to use when parsing DGML. Defaults to 32.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.docugami.DocugamiLoader.html
|
b2da08d33074-1
|
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
validator validate_local_or_remote » all fields[source]¶
Validate that either local file paths are given, or remote API docset ID.
Parameters
values – The values to validate.
Returns
The validated values.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.docugami.DocugamiLoader.html
|
3f39de8bd21e-0
|
langchain.document_loaders.psychic.PsychicLoader¶
class langchain.document_loaders.psychic.PsychicLoader(api_key: str, account_id: str, connector_id: Optional[str] = None)[source]¶
Bases: BaseLoader
Loader that loads documents from Psychic.dev.
Initialize with API key, connector id, and account id.
Methods
__init__(api_key, account_id[, connector_id])
Initialize with API key, connector id, and account id.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.psychic.PsychicLoader.html
|
ad8ce9606b33-0
|
langchain.document_loaders.ifixit.IFixitLoader¶
class langchain.document_loaders.ifixit.IFixitLoader(web_path: str)[source]¶
Bases: BaseLoader
Load iFixit repair guides, device wikis and answers.
iFixit is the largest, open repair community on the web. The site contains nearly
100k repair manuals, 200k Questions & Answers on 42k devices, and all the data is
licensed under CC-BY.
This loader will allow you to download the text of a repair guide, text of Q&A’s
and wikis from devices on iFixit using their open APIs and web scraping.
Initialize with a web path.
Methods
__init__(web_path)
Initialize with a web path.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
load_device([url_override, include_guides])
Loads a device
load_guide([url_override])
Load a guide
load_questions_and_answers([url_override])
Load a list of questions and answers.
load_suggestions([query, doc_type])
Load suggestions.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
load_device(url_override: Optional[str] = None, include_guides: bool = True) → List[Document][source]¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.ifixit.IFixitLoader.html
|
ad8ce9606b33-1
|
Loads a device
Parameters
url_override – A URL to override the default URL.
include_guides – Whether to include guides linked to from the device.
Defaults to True.
Returns:
load_guide(url_override: Optional[str] = None) → List[Document][source]¶
Load a guide
Parameters
url_override – A URL to override the default URL.
Returns: List[Document]
load_questions_and_answers(url_override: Optional[str] = None) → List[Document][source]¶
Load a list of questions and answers.
Parameters
url_override – A URL to override the default URL.
Returns: List[Document]
static load_suggestions(query: str = '', doc_type: str = 'all') → List[Document][source]¶
Load suggestions.
Parameters
query – A query string
doc_type – The type of document to search for. Can be one of “all”,
“device”, “guide”, “teardown”, “answer”, “wiki”.
Returns:
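A minimal usage sketch; the device URL and query below are placeholders:
```python
from langchain.document_loaders import IFixitLoader

loader = IFixitLoader("https://www.ifixit.com/Device/Standard_iPad")   # placeholder page
docs = loader.load()

# Search-based loading without instantiating the loader:
suggestions = IFixitLoader.load_suggestions(query="iPad", doc_type="device")
```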
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.ifixit.IFixitLoader.html
|
209add47fceb-0
|
langchain.document_loaders.toml.TomlLoader¶
class langchain.document_loaders.toml.TomlLoader(source: Union[str, Path])[source]¶
Bases: BaseLoader
A TOML document loader that inherits from the BaseLoader class.
This class can be initialized with either a single source file or a source
directory containing TOML files.
Initialize the TomlLoader with a source file or directory.
Methods
__init__(source)
Initialize the TomlLoader with a source file or directory.
lazy_load()
Lazily load the TOML documents from the source file or directory.
load()
Load and return all documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document][source]¶
Lazily load the TOML documents from the source file or directory.
load() → List[Document][source]¶
Load and return all documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.toml.TomlLoader.html
|
97348a48988c-0
|
langchain.document_loaders.image_captions.ImageCaptionLoader¶
class langchain.document_loaders.image_captions.ImageCaptionLoader(path_images: Union[str, List[str]], blip_processor: str = 'Salesforce/blip-image-captioning-base', blip_model: str = 'Salesforce/blip-image-captioning-base')[source]¶
Bases: BaseLoader
Loads the captions of an image
Initialize with a list of image paths
Parameters
path_images – A list of image paths.
blip_processor – The name of the pre-trained BLIP processor.
blip_model – The name of the pre-trained BLIP model.
Methods
__init__(path_images[, blip_processor, ...])
Initialize with a list of image paths
lazy_load()
A lazy loader for Documents.
load()
Load from a list of image files
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load from a list of image files
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
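A minimal usage sketch, assuming the transformers package is installed (the default BLIP checkpoints are downloaded on first use); the image paths are placeholders:
```python
from langchain.document_loaders import ImageCaptionLoader

loader = ImageCaptionLoader(path_images=["photo1.jpg", "photo2.jpg"])   # placeholders
docs = loader.load()   # one Document per image; page_content holds the caption
```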
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.image_captions.ImageCaptionLoader.html
|
0d8c18f2ed47-0
|
langchain.document_loaders.parsers.language.code_segmenter.CodeSegmenter¶
class langchain.document_loaders.parsers.language.code_segmenter.CodeSegmenter(code: str)[source]¶
Bases: ABC
The abstract class for the code segmenter.
Methods
__init__(code)
extract_functions_classes()
is_valid()
simplify_code()
abstract extract_functions_classes() → List[str][source]¶
is_valid() → bool[source]¶
abstract simplify_code() → str[source]¶
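An illustrative subclass sketch; NaiveSegmenter is a hypothetical name, not part of the library:
```python
from typing import List

from langchain.document_loaders.parsers.language.code_segmenter import CodeSegmenter

class NaiveSegmenter(CodeSegmenter):
    """Hypothetical segmenter: treats blank-line-separated chunks as functions/classes."""

    def extract_functions_classes(self) -> List[str]:
        return [chunk for chunk in self.code.split("\n\n") if chunk.strip()]

    def simplify_code(self) -> str:
        return self.code   # no simplification in this sketch

segments = NaiveSegmenter("def a():\n    pass\n\ndef b():\n    pass").extract_functions_classes()
```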
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.language.code_segmenter.CodeSegmenter.html
|
76db6e8b5f88-0
|
langchain.document_loaders.imsdb.IMSDbLoader¶
class langchain.document_loaders.imsdb.IMSDbLoader(web_path: Union[str, List[str]], header_template: Optional[dict] = None, verify_ssl: Optional[bool] = True, proxies: Optional[dict] = None)[source]¶
Bases: WebBaseLoader
Loads IMSDb webpages.
Initialize with webpage path.
Methods
__init__(web_path[, header_template, ...])
Initialize with webpage path.
aload()
Load text from the urls in web_path async into Documents.
fetch_all(urls)
Fetch all urls concurrently with rate limiting.
lazy_load()
Lazy load text from the url(s) in web_path.
load()
Load webpage.
load_and_split([text_splitter])
Load Documents and split into chunks.
scrape([parser])
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls[, parser])
Fetch all urls, then return soups for all results.
Attributes
bs_get_text_kwargs
kwargs for beautifulsoup4 get_text
default_parser
Default parser to use for BeautifulSoup.
raise_for_status
Raise an exception if http status code denotes an error.
requests_kwargs
kwargs for requests
requests_per_second
Max number of concurrent requests to make.
web_path
aload() → List[Document]¶
Load text from the urls in web_path async into Documents.
async fetch_all(urls: List[str]) → Any¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶
Load webpage.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.imsdb.IMSDbLoader.html
|
76db6e8b5f88-1
|
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
scrape(parser: Optional[str] = None) → Any¶
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any]¶
Fetch all urls, then return soups for all results.
bs_get_text_kwargs: Dict[str, Any] = {}¶
kwargs for beautifulsoup4 get_text
default_parser: str = 'html.parser'¶
Default parser to use for BeautifulSoup.
raise_for_status: bool = False¶
Raise an exception if http status code denotes an error.
requests_kwargs: Dict[str, Any] = {}¶
kwargs for requests
requests_per_second: int = 2¶
Max number of concurrent requests to make.
property web_path: str¶
web_paths: List[str]¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.imsdb.IMSDbLoader.html
|
2c875d9c0e04-0
|
langchain.document_loaders.brave_search.BraveSearchLoader¶
class langchain.document_loaders.brave_search.BraveSearchLoader(query: str, api_key: str, search_kwargs: Optional[dict] = None)[source]¶
Bases: BaseLoader
Loads a query result from Brave Search engine into a list of Documents.
Initializes the BraveSearchLoader.
Parameters
query – The query to search for.
api_key – The API key to use.
search_kwargs – The search kwargs to use.
Methods
__init__(query, api_key[, search_kwargs])
Initializes the BraveSearchLoader.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
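Example (illustrative sketch; the API key and search_kwargs values are placeholders/assumptions):
```python
from langchain.document_loaders.brave_search import BraveSearchLoader

loader = BraveSearchLoader(
    query="langchain document loaders",
    api_key="BRAVE_API_KEY",     # placeholder: supply a real Brave Search API key
    search_kwargs={"count": 3},  # assumed optional parameter limiting result count
)
docs = loader.load()  # one Document per search result
```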
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.brave_search.BraveSearchLoader.html
|
cd8c3396dda6-0
|
langchain.document_loaders.notion.NotionDirectoryLoader¶
class langchain.document_loaders.notion.NotionDirectoryLoader(path: str)[source]¶
Bases: BaseLoader
Loader that loads Notion directory dump.
Initialize with path.
Methods
__init__(path)
Initialize with path.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
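Example (illustrative sketch; "Notion_DB" is a placeholder folder containing an exported Notion dump):
```python
from langchain.document_loaders.notion import NotionDirectoryLoader

loader = NotionDirectoryLoader("Notion_DB")  # placeholder path to the exported directory
docs = loader.load()  # one Document per markdown file in the dump
```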
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.notion.NotionDirectoryLoader.html
|
1cc825995544-0
|
langchain.document_loaders.gutenberg.GutenbergLoader¶
class langchain.document_loaders.gutenberg.GutenbergLoader(file_path: str)[source]¶
Bases: BaseLoader
Loader that uses urllib to load .txt web files.
Initialize with a file path.
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
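Example (illustrative sketch; the URL is a placeholder plain-text Project Gutenberg link):
```python
from langchain.document_loaders.gutenberg import GutenbergLoader

# Placeholder: any https URL ending in .txt from Project Gutenberg should work.
loader = GutenbergLoader("https://www.gutenberg.org/cache/epub/69972/pg69972.txt")
docs = loader.load()
```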
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gutenberg.GutenbergLoader.html
|
a9f91648d2a9-0
|
langchain.document_loaders.hugging_face_dataset.HuggingFaceDatasetLoader¶
class langchain.document_loaders.hugging_face_dataset.HuggingFaceDatasetLoader(path: str, page_content_column: str = 'text', name: Optional[str] = None, data_dir: Optional[str] = None, data_files: Optional[Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]] = None, cache_dir: Optional[str] = None, keep_in_memory: Optional[bool] = None, save_infos: bool = False, use_auth_token: Optional[Union[bool, str]] = None, num_proc: Optional[int] = None)[source]¶
Bases: BaseLoader
Load Documents from the Hugging Face Hub.
Initialize the HuggingFaceDatasetLoader.
Parameters
path – Path or name of the dataset.
page_content_column – Page content column name. Default is “text”.
name – Name of the dataset configuration.
data_dir – Data directory of the dataset configuration.
data_files – Path(s) to source data file(s).
cache_dir – Directory to read/write data.
keep_in_memory – Whether to copy the dataset in-memory.
save_infos – Save the dataset information (checksums/size/splits/…).
Default is False.
use_auth_token – Bearer token for remote files on the Dataset Hub.
num_proc – Number of processes.
Methods
__init__(path[, page_content_column, name, ...])
Initialize the HuggingFaceDatasetLoader.
lazy_load()
Load documents lazily.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document][source]¶
Load documents lazily.
load() → List[Document][source]¶
Load documents.
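Example (illustrative sketch; "imdb" is an assumed public Hub dataset and the datasets package must be installed):
```python
from langchain.document_loaders.hugging_face_dataset import HuggingFaceDatasetLoader

loader = HuggingFaceDatasetLoader(path="imdb", page_content_column="text")
docs = loader.load()  # remaining dataset columns are stored in each Document's metadata
```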
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.hugging_face_dataset.HuggingFaceDatasetLoader.html
|
a9f91648d2a9-1
|
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.hugging_face_dataset.HuggingFaceDatasetLoader.html
|
433b9945f4c2-0
|
langchain.document_loaders.sitemap.SitemapLoader¶
class langchain.document_loaders.sitemap.SitemapLoader(web_path: str, filter_urls: Optional[List[str]] = None, parsing_function: Optional[Callable] = None, blocksize: Optional[int] = None, blocknum: int = 0, meta_function: Optional[Callable] = None, is_local: bool = False)[source]¶
Bases: WebBaseLoader
Loader that fetches a sitemap and loads those URLs.
Initialize with webpage path and optional filter URLs.
Parameters
web_path – URL of the sitemap. Can also be a local path.
filter_urls – list of strings or regexes that will be applied to filter the
urls that are parsed and loaded
parsing_function – Function to parse bs4.Soup output
blocksize – number of sitemap locations per block
blocknum – the number of the block that should be loaded - zero indexed
meta_function – Function to parse bs4.Soup output for metadata
when setting this function, remember to also copy metadata["loc"]
to metadata["source"] if you are using that field
is_local – whether the sitemap is a local file
Methods
__init__(web_path[, filter_urls, ...])
Initialize with webpage path and optional filter URLs.
aload()
Load text from the urls in web_path async into Documents.
fetch_all(urls)
Fetch all urls concurrently with rate limiting.
lazy_load()
Lazy load text from the url(s) in web_path.
load()
Load sitemap.
load_and_split([text_splitter])
Load Documents and split into chunks.
parse_sitemap(soup)
Parse sitemap xml and load into a list of dicts.
scrape([parser])
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls[, parser])
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.sitemap.SitemapLoader.html
|
433b9945f4c2-1
|
scrape_all(urls[, parser])
Fetch all urls, then return soups for all results.
Attributes
bs_get_text_kwargs
kwargs for beautifulsoup4 get_text
default_parser
Default parser to use for BeautifulSoup.
raise_for_status
Raise an exception if http status code denotes an error.
requests_kwargs
kwargs for requests
requests_per_second
Max number of concurrent requests to make.
web_path
aload() → List[Document]¶
Load text from the urls in web_path async into Documents.
async fetch_all(urls: List[str]) → Any¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶
Load sitemap.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
parse_sitemap(soup: Any) → List[dict][source]¶
Parse sitemap xml and load into a list of dicts.
scrape(parser: Optional[str] = None) → Any¶
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any]¶
Fetch all urls, then return soups for all results.
bs_get_text_kwargs: Dict[str, Any] = {}¶
kwargs for beautifulsoup4 get_text
default_parser: str = 'html.parser'¶
Default parser to use for BeautifulSoup.
raise_for_status: bool = False¶
Raise an exception if http status code denotes an error.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.sitemap.SitemapLoader.html
|
433b9945f4c2-2
|
Raise an exception if http status code denotes an error.
requests_kwargs: Dict[str, Any] = {}¶
kwargs for requests
requests_per_second: int = 2¶
Max number of concurrent requests to make.
property web_path: str¶
web_paths: List[str]¶
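Example (illustrative sketch; the sitemap URL and filter pattern are placeholders):
```python
from langchain.document_loaders.sitemap import SitemapLoader

loader = SitemapLoader(
    web_path="https://example.com/sitemap.xml",  # placeholder sitemap URL
    filter_urls=["https://example.com/docs/"],   # only load matching URLs
)
docs = loader.load()
```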
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.sitemap.SitemapLoader.html
|
ec77a94b792b-0
|
langchain.document_loaders.spreedly.SpreedlyLoader¶
class langchain.document_loaders.spreedly.SpreedlyLoader(access_token: str, resource: str)[source]¶
Bases: BaseLoader
Loader that fetches data from Spreedly API.
Methods
__init__(access_token, resource)
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
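Example (illustrative sketch; both the token and the resource name are placeholders, check Spreedly's API docs for valid resources):
```python
from langchain.document_loaders.spreedly import SpreedlyLoader

loader = SpreedlyLoader(
    access_token="SPREEDLY_ACCESS_TOKEN",  # placeholder token
    resource="transactions",               # assumed resource name
)
docs = loader.load()
```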
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.spreedly.SpreedlyLoader.html
|
5033f752659b-0
|
langchain.document_loaders.merge.MergedDataLoader¶
class langchain.document_loaders.merge.MergedDataLoader(loaders: List)[source]¶
Bases: BaseLoader
Merge documents from a list of loaders
Initialize with a list of loaders
Methods
__init__(loaders)
Initialize with a list of loaders
lazy_load()
Lazy load docs from each individual loader.
load()
Load docs.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document][source]¶
Lazy load docs from each individual loader.
load() → List[Document][source]¶
Load docs.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
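Example (illustrative sketch; the file path and URL are placeholders, and any BaseLoader instances can be combined):
```python
from langchain.document_loaders import TextLoader, WebBaseLoader
from langchain.document_loaders.merge import MergedDataLoader

loader_all = MergedDataLoader(
    loaders=[
        TextLoader("notes.txt"),               # placeholder local file
        WebBaseLoader("https://example.com"),  # placeholder web page
    ]
)
docs = loader_all.load()  # documents from every wrapped loader, concatenated
```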
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.merge.MergedDataLoader.html
|
a00ba3474d34-0
|
langchain.document_loaders.email.UnstructuredEmailLoader¶
class langchain.document_loaders.email.UnstructuredEmailLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredFileLoader
Loader that uses unstructured to load email files. Works with both
.eml and .msg files. You can process attachments in addition to the
e-mail message itself by passing process_attachments=True into the
constructor for the loader. By default, attachments will be processed
with the unstructured partition function. If you already know the document
types of the attachments, you can specify another partitioning function
with the attachment partitioner kwarg.
Example
from langchain.document_loaders import UnstructuredEmailLoader
loader = UnstructuredEmailLoader("example_data/fake-email.eml", mode="elements")
loader.load()
Example
from langchain.document_loaders import UnstructuredEmailLoader
loader = UnstructuredEmailLoader("example_data/fake-email-attachment.eml",
mode="elements",
process_attachments=True,
)
loader.load()
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.email.UnstructuredEmailLoader.html
|
514952020e6d-0
|
langchain.document_loaders.embaas.EmbaasLoader¶
class langchain.document_loaders.embaas.EmbaasLoader(*, embaas_api_key: Optional[str] = None, api_url: str = 'https://api.embaas.io/v1/document/extract-text/bytes/', params: EmbaasDocumentExtractionParameters = {}, file_path: str, blob_loader: Optional[EmbaasBlobLoader] = None)[source]¶
Bases: BaseEmbaasLoader, BaseLoader
Embaas’s document loader.
To use, you should have the
environment variable EMBAAS_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
# Default parsing
from langchain.document_loaders.embaas import EmbaasLoader
loader = EmbaasLoader(file_path="example.mp3")
documents = loader.load()
# Custom api parameters (create embeddings automatically)
from langchain.document_loaders.embaas import EmbaasBlobLoader
loader = EmbaasBlobLoader(
file_path="example.pdf",
params={
"should_embed": True,
"model": "e5-large-v2",
"chunk_size": 256,
"chunk_splitter": "CharacterTextSplitter"
}
)
documents = loader.load()
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param api_url: str = 'https://api.embaas.io/v1/document/extract-text/bytes/'¶
The URL of the embaas document extraction API.
param blob_loader: Optional[langchain.document_loaders.embaas.EmbaasBlobLoader] = None¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasLoader.html
|
514952020e6d-1
|
The blob loader to use. If not provided, a default one will be created.
param embaas_api_key: Optional[str] = None¶
The API key for the embaas document extraction API.
param file_path: str [Required]¶
The path to the file to load.
param params: langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters = {}¶
Additional parameters to pass to the embaas document extraction API.
lazy_load() → Iterator[Document][source]¶
Load the documents from the file path lazily.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document][source]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
validator validate_blob_loader » blob_loader[source]¶
validator validate_environment » all fields¶
Validate that api key and python package exists in environment.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasLoader.html
|
a73b4aedc2b6-0
|
langchain.document_loaders.json_loader.JSONLoader¶
class langchain.document_loaders.json_loader.JSONLoader(file_path: Union[str, Path], jq_schema: str, content_key: Optional[str] = None, metadata_func: Optional[Callable[[Dict, Dict], Dict]] = None, text_content: bool = True, json_lines: bool = False)[source]¶
Bases: BaseLoader
Loads a JSON file using a jq schema.
Example
[{"text": ...}, {"text": ...}, {"text": ...}] -> schema = .[].text
{"key": [{"text": ...}, {"text": ...}, {"text": ...}]} -> schema = .key[].text
["", "", ""] -> schema = .[]
Initialize the JSONLoader.
Parameters
file_path (Union[str, Path]) – The path to the JSON or JSON Lines file.
jq_schema (str) – The jq schema to use to extract the data or text from
the JSON.
content_key (str) – The key to use to extract the content from the JSON if
the jq_schema results in a list of objects (dict).
metadata_func (Callable[Dict, Dict]) – A function that takes in the JSON
object extracted by the jq_schema and the default metadata and returns
a dict of the updated metadata.
text_content (bool) – Boolean flag to indicate whether the content is in
string format, default to True.
json_lines (bool) – Boolean flag to indicate whether the input is in
JSON Lines format.
Methods
__init__(file_path, jq_schema[, ...])
Initialize the JSONLoader.
lazy_load()
A lazy loader for Documents.
load()
Load and return documents from the JSON file.
load_and_split([text_splitter])
Load Documents and split into chunks.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.json_loader.JSONLoader.html
|
a73b4aedc2b6-1
|
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load and return documents from the JSON file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
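Example (illustrative sketch; "chat.json" is a placeholder file shaped like {"messages": [{"text": ...}, ...]} and the jq package must be installed):
```python
from langchain.document_loaders.json_loader import JSONLoader

loader = JSONLoader(
    file_path="chat.json",         # placeholder path
    jq_schema=".messages[].text",  # extract the text field of every message
)
docs = loader.load()
```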
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.json_loader.JSONLoader.html
|
221fe4577134-0
|
langchain.document_loaders.csv_loader.UnstructuredCSVLoader¶
class langchain.document_loaders.csv_loader.UnstructuredCSVLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredFileLoader
Loader that uses unstructured to load CSV files.
Parameters
file_path – The path to the CSV file.
mode – The mode to use when loading the CSV file.
Optional. Defaults to “single”.
**unstructured_kwargs – Keyword arguments to pass to unstructured.
Methods
__init__(file_path[, mode])
param file_path
The path to the CSV file.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
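Example (illustrative sketch; "data.csv" is a placeholder and the unstructured package must be installed):
```python
from langchain.document_loaders.csv_loader import UnstructuredCSVLoader

loader = UnstructuredCSVLoader("data.csv", mode="elements")
docs = loader.load()
```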
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.csv_loader.UnstructuredCSVLoader.html
|
67fd0e80f68b-0
|
langchain.document_loaders.github.GitHubIssuesLoader¶
class langchain.document_loaders.github.GitHubIssuesLoader(*, repo: str, access_token: str, include_prs: bool = True, milestone: Optional[Union[int, Literal['*', 'none']]] = None, state: Optional[Literal['open', 'closed', 'all']] = None, assignee: Optional[str] = None, creator: Optional[str] = None, mentioned: Optional[str] = None, labels: Optional[List[str]] = None, sort: Optional[Literal['created', 'updated', 'comments']] = None, direction: Optional[Literal['asc', 'desc']] = None, since: Optional[str] = None)[source]¶
Bases: BaseGitHubLoader
Load issues of a GitHub repository.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param access_token: str [Required]¶
Personal access token - see https://github.com/settings/tokens?type=beta
param assignee: Optional[str] = None¶
Filter on assigned user. Pass ‘none’ for no user and ‘*’ for any user.
param creator: Optional[str] = None¶
Filter on the user that created the issue.
param direction: Optional[Literal['asc', 'desc']] = None¶
The direction to sort the results by. Can be one of: ‘asc’, ‘desc’.
param include_prs: bool = True¶
If True include Pull Requests in results, otherwise ignore them.
param labels: Optional[List[str]] = None¶
Label names to filter on. Example: bug,ui,@high.
param mentioned: Optional[str] = None¶
Filter on a user that’s mentioned in the issue.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.github.GitHubIssuesLoader.html
|
67fd0e80f68b-1
|
Filter on a user that’s mentioned in the issue.
param milestone: Optional[Union[int, Literal['*', 'none']]] = None¶
If integer is passed, it should be a milestone’s number field.
If the string ‘*’ is passed, issues with any milestone are accepted.
If the string ‘none’ is passed, issues without milestones are returned.
param repo: str [Required]¶
Name of repository
param since: Optional[str] = None¶
Only show notifications updated after the given time.
This is a timestamp in ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ.
param sort: Optional[Literal['created', 'updated', 'comments']] = None¶
What to sort results by. Can be one of: ‘created’, ‘updated’, ‘comments’.
Default is ‘created’.
param state: Optional[Literal['open', 'closed', 'all']] = None¶
Filter on issue state. Can be one of: ‘open’, ‘closed’, ‘all’.
lazy_load() → Iterator[Document][source]¶
Get issues of a GitHub repository.
Returns
page_content
metadata
url
title
creator
created_at
last_update_time
closed_time
number of comments
state
labels
assignee
assignees
milestone
locked
number
is_pull_request
Return type
A list of Documents with attributes
load() → List[Document][source]¶
Get issues of a GitHub repository.
Returns
page_content
metadata
url
title
creator
created_at
last_update_time
closed_time
number of comments
state
labels
assignee
assignees
milestone
locked
number
is_pull_request
Return type
A list of Documents with attributes
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.github.GitHubIssuesLoader.html
|
67fd0e80f68b-2
|
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
parse_issue(issue: dict) → Document[source]¶
Create a Document object from a single GitHub issue.
validator validate_environment » all fields¶
Validate that access token exists in environment.
validator validate_since » since[source]¶
property headers: Dict[str, str]¶
property query_params: str¶
Create query parameters for GitHub API.
property url: str¶
Create URL for GitHub API.
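Example (illustrative sketch; the repository name and token are placeholders):
```python
from langchain.document_loaders.github import GitHubIssuesLoader

loader = GitHubIssuesLoader(
    repo="hwchase17/langchain",                   # placeholder "owner/name" repository
    access_token="GITHUB_PERSONAL_ACCESS_TOKEN",  # placeholder token
    include_prs=False,
    state="open",
)
docs = loader.load()  # one Document per issue; metadata includes url, title, labels, ...
```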
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.github.GitHubIssuesLoader.html
|
c39c63dc3888-0
|
langchain.document_loaders.pdf.PyPDFLoader¶
class langchain.document_loaders.pdf.PyPDFLoader(file_path: str, password: Optional[Union[str, bytes]] = None)[source]¶
Bases: BasePDFLoader
Loads a PDF with pypdf and chunks at character level.
Loader also stores page numbers in metadatas.
Initialize with file path.
Methods
__init__(file_path[, password])
Initialize with file path.
lazy_load()
Lazy load given path as pages.
load()
Load given path as pages.
load_and_split([text_splitter])
Load Documents and split into chunks.
Attributes
source
lazy_load() → Iterator[Document][source]¶
Lazy load given path as pages.
load() → List[Document][source]¶
Load given path as pages.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
property source: str¶
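Example (illustrative sketch; "example.pdf" is a placeholder and pypdf must be installed):
```python
from langchain.document_loaders.pdf import PyPDFLoader

loader = PyPDFLoader("example.pdf")
pages = loader.load()  # one Document per page, with the page number in metadata
```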
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.PyPDFLoader.html
|
ab4769c9beec-0
|
langchain.document_loaders.email.OutlookMessageLoader¶
class langchain.document_loaders.email.OutlookMessageLoader(file_path: str)[source]¶
Bases: BaseLoader
Loads Outlook Message files using extract_msg.
https://github.com/TeamMsgExtractor/msg-extractor
Initialize with a file path.
Parameters
file_path – The path to the Outlook Message file.
Methods
__init__(file_path)
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load data into document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
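Example (illustrative sketch; "message.msg" is a placeholder and extract-msg must be installed):
```python
from langchain.document_loaders.email import OutlookMessageLoader

loader = OutlookMessageLoader("message.msg")
docs = loader.load()
```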
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.email.OutlookMessageLoader.html
|
0825f258de41-0
|
langchain.document_loaders.parsers.language.python.PythonSegmenter¶
class langchain.document_loaders.parsers.language.python.PythonSegmenter(code: str)[source]¶
Bases: CodeSegmenter
The code segmenter for Python.
Methods
__init__(code)
extract_functions_classes()
is_valid()
simplify_code()
extract_functions_classes() → List[str][source]¶
is_valid() → bool[source]¶
simplify_code() → str[source]¶
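Example (minimal sketch showing the methods listed above on a toy code string):
```python
from langchain.document_loaders.parsers.language.python import PythonSegmenter

code = "def hello():\n    return 'hi'\n\nclass Greeter:\n    pass\n"
segmenter = PythonSegmenter(code)
if segmenter.is_valid():
    functions_and_classes = segmenter.extract_functions_classes()
    skeleton = segmenter.simplify_code()
```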
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.language.python.PythonSegmenter.html
|
3ec0dc0f5bd5-0
|
langchain.document_loaders.dataframe.DataFrameLoader¶
class langchain.document_loaders.dataframe.DataFrameLoader(data_frame: Any, page_content_column: str = 'text')[source]¶
Bases: BaseLoader
Load Pandas DataFrame.
Initialize with dataframe object.
Parameters
data_frame – Pandas DataFrame object.
page_content_column – Name of the column containing the page content.
Defaults to “text”.
Methods
__init__(data_frame[, page_content_column])
Initialize with dataframe object.
lazy_load()
Lazy load records from dataframe.
load()
Load full dataframe.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document][source]¶
Lazy load records from dataframe.
load() → List[Document][source]¶
Load full dataframe.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
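Example (minimal sketch with a toy DataFrame; pandas must be installed):
```python
import pandas as pd

from langchain.document_loaders.dataframe import DataFrameLoader

# The "text" column becomes page_content; the remaining columns become metadata.
df = pd.DataFrame({"text": ["first row", "second row"], "author": ["a", "b"]})
loader = DataFrameLoader(df, page_content_column="text")
docs = loader.load()
```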
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.dataframe.DataFrameLoader.html
|
289c8df19642-0
|
langchain.document_loaders.blackboard.BlackboardLoader¶
class langchain.document_loaders.blackboard.BlackboardLoader(blackboard_course_url: str, bbrouter: str, load_all_recursively: bool = True, basic_auth: Optional[Tuple[str, str]] = None, cookies: Optional[dict] = None)[source]¶
Bases: WebBaseLoader
Loads all documents from a Blackboard course.
This loader is not compatible with all Blackboard courses. It is only
compatible with courses that use the new Blackboard interface.
To use this loader, you must have the BbRouter cookie. You can get this
cookie by logging into the course and then copying the value of the
BbRouter cookie from the browser’s developer tools.
Example
from langchain.document_loaders import BlackboardLoader
loader = BlackboardLoader(
blackboard_course_url="https://blackboard.example.com/webapps/blackboard/execute/announcement?method=search&context=course_entry&course_id=_123456_1",
bbrouter="expires:12345...",
)
documents = loader.load()
Initialize with blackboard course url.
The BbRouter cookie is required for most blackboard courses.
Parameters
blackboard_course_url – Blackboard course url.
bbrouter – BbRouter cookie.
load_all_recursively – If True, load all documents recursively.
basic_auth – Basic auth credentials.
cookies – Cookies.
Raises
ValueError – If blackboard course url is invalid.
Methods
__init__(blackboard_course_url, bbrouter[, ...])
Initialize with blackboard course url.
aload()
Load text from the urls in web_path async into Documents.
check_bs4()
Check if BeautifulSoup4 is installed.
download(path)
Download a file from an url.
fetch_all(urls)
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blackboard.BlackboardLoader.html
|
289c8df19642-1
|
download(path)
Download a file from an url.
fetch_all(urls)
Fetch all urls concurrently with rate limiting.
lazy_load()
Lazy load text from the url(s) in web_path.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
parse_filename(url)
Parse the filename from an url.
scrape([parser])
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls[, parser])
Fetch all urls, then return soups for all results.
Attributes
bs_get_text_kwargs
kwargs for beatifulsoup4 get_text
default_parser
Default parser to use for BeautifulSoup.
raise_for_status
Raise an exception if http status code denotes an error.
requests_kwargs
kwargs for requests
requests_per_second
Max number of concurrent requests to make.
web_path
base_url
Base url of the blackboard course.
folder_path
Path to the folder containing the documents.
load_all_recursively
If True, load all documents recursively.
aload() → List[Document]¶
Load text from the urls in web_path async into Documents.
check_bs4() → None[source]¶
Check if BeautifulSoup4 is installed.
Raises
ImportError – If BeautifulSoup4 is not installed.
download(path: str) → None[source]¶
Download a file from an url.
Parameters
path – Path to the file.
async fetch_all(urls: List[str]) → Any¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶
Load data into Document objects.
Returns
List of Documents.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blackboard.BlackboardLoader.html
|
289c8df19642-2
|
Load data into Document objects.
Returns
List of Documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
parse_filename(url: str) → str[source]¶
Parse the filename from an url.
Parameters
url – Url to parse the filename from.
Returns
The filename.
scrape(parser: Optional[str] = None) → Any¶
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any]¶
Fetch all urls, then return soups for all results.
base_url: str¶
Base url of the blackboard course.
bs_get_text_kwargs: Dict[str, Any] = {}¶
kwargs for beatifulsoup4 get_text
default_parser: str = 'html.parser'¶
Default parser to use for BeautifulSoup.
folder_path: str¶
Path to the folder containing the documents.
load_all_recursively: bool¶
If True, load all documents recursively.
raise_for_status: bool = False¶
Raise an exception if http status code denotes an error.
requests_kwargs: Dict[str, Any] = {}¶
kwargs for requests
requests_per_second: int = 2¶
Max number of concurrent requests to make.
property web_path: str¶
web_paths: List[str]¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blackboard.BlackboardLoader.html
|
297f35ea5968-0
|
langchain.document_loaders.parsers.html.bs4.BS4HTMLParser¶
class langchain.document_loaders.parsers.html.bs4.BS4HTMLParser(*, features: str = 'lxml', get_text_separator: str = '', **kwargs: Any)[source]¶
Bases: BaseBlobParser
Parser that uses beautiful soup to parse HTML files.
Initialize a bs4 based HTML parser.
Methods
__init__(*[, features, get_text_separator])
Initialize a bs4 based HTML parser.
lazy_parse(blob)
Load HTML document into document objects.
parse(blob)
Eagerly parse the blob into a document or documents.
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Load HTML document into document objects.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environment.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents
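Example (illustrative sketch; "page.html" is a placeholder and the default features="lxml" requires lxml):
```python
from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.parsers.html.bs4 import BS4HTMLParser

parser = BS4HTMLParser(get_text_separator=" ")
docs = parser.parse(Blob.from_path("page.html"))  # placeholder local HTML file
```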
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.html.bs4.BS4HTMLParser.html
|
9407594d74f4-0
|
langchain.document_loaders.weather.WeatherDataLoader¶
class langchain.document_loaders.weather.WeatherDataLoader(client: OpenWeatherMapAPIWrapper, places: Sequence[str])[source]¶
Bases: BaseLoader
Weather Reader.
Reads the forecast & current weather of any location using OpenWeatherMap's free
API. Check out 'https://openweathermap.org/appid' for more on how to generate a free
OpenWeatherMap API key.
Initialize with parameters.
Methods
__init__(client, places)
Initialize with parameters.
from_params(places, *[, openweathermap_api_key])
lazy_load()
Lazily load weather data for the given locations.
load()
Load weather data for the given locations.
load_and_split([text_splitter])
Load Documents and split into chunks.
classmethod from_params(places: Sequence[str], *, openweathermap_api_key: Optional[str] = None) → WeatherDataLoader[source]¶
lazy_load() → Iterator[Document][source]¶
Lazily load weather data for the given locations.
load() → List[Document][source]¶
Load weather data for the given locations.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
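Example (illustrative sketch; the place names and API key are placeholders, and pyowm must be installed):
```python
from langchain.document_loaders.weather import WeatherDataLoader

loader = WeatherDataLoader.from_params(
    places=["London", "Tokyo"],
    openweathermap_api_key="OPENWEATHERMAP_API_KEY",  # placeholder key
)
docs = loader.load()  # one Document per place
```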
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.weather.WeatherDataLoader.html
|
e970392e09f7-0
|
langchain.document_loaders.blob_loaders.schema.BlobLoader¶
class langchain.document_loaders.blob_loaders.schema.BlobLoader[source]¶
Bases: ABC
Abstract interface for blob loader implementations.
An implementation should be able to load raw content from a storage system according
to some criteria and return the raw content lazily as a stream of blobs.
Methods
__init__()
yield_blobs()
A lazy loader for raw data represented by LangChain's Blob object.
abstract yield_blobs() → Iterable[Blob][source]¶
A lazy loader for raw data represented by LangChain’s Blob object.
Returns
A generator over blobs
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blob_loaders.schema.BlobLoader.html
|
72c73fe36e30-0
|
langchain.document_loaders.html_bs.BSHTMLLoader¶
class langchain.document_loaders.html_bs.BSHTMLLoader(file_path: str, open_encoding: Optional[str] = None, bs_kwargs: Optional[dict] = None, get_text_separator: str = '')[source]¶
Bases: BaseLoader
Loader that uses beautiful soup to parse HTML files.
Initialise with path, and optionally, file encoding to use, and any kwargs
to pass to the BeautifulSoup object.
Parameters
file_path – The path to the file to load.
open_encoding – The encoding to use when opening the file.
bs_kwargs – Any kwargs to pass to the BeautifulSoup object.
get_text_separator – The separator to use when calling get_text on the soup.
Methods
__init__(file_path[, open_encoding, ...])
Initialise with path, and optionally, file encoding to use, and any kwargs to pass to the BeautifulSoup object.
lazy_load()
A lazy loader for Documents.
load()
Load HTML document into document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load HTML document into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
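Example (illustrative sketch; "page.html" is a placeholder local HTML file):
```python
from langchain.document_loaders.html_bs import BSHTMLLoader

loader = BSHTMLLoader("page.html", get_text_separator=" ")
docs = loader.load()  # the page title is stored in the Document metadata
```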
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.html_bs.BSHTMLLoader.html
|
6bb002cba89c-0
|
langchain.document_loaders.pyspark_dataframe.PySparkDataFrameLoader¶
class langchain.document_loaders.pyspark_dataframe.PySparkDataFrameLoader(spark_session: Optional[SparkSession] = None, df: Optional[Any] = None, page_content_column: str = 'text', fraction_of_memory: float = 0.1)[source]¶
Bases: BaseLoader
Load PySpark DataFrames
Initialize with a Spark DataFrame object.
Methods
__init__([spark_session, df, ...])
Initialize with a Spark DataFrame object.
get_num_rows()
Gets the number of "feasible" rows for the DataFrame
lazy_load()
A lazy loader for document content.
load()
Load from the dataframe.
load_and_split([text_splitter])
Load Documents and split into chunks.
get_num_rows() → Tuple[int, int][source]¶
Gets the number of “feasible” rows for the DataFrame
lazy_load() → Iterator[Document][source]¶
A lazy loader for document content.
load() → List[Document][source]¶
Load from the dataframe.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
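Example (minimal sketch with a toy Spark DataFrame; pyspark must be installed):
```python
from pyspark.sql import SparkSession

from langchain.document_loaders.pyspark_dataframe import PySparkDataFrameLoader

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("first row",), ("second row",)], ["text"])
loader = PySparkDataFrameLoader(spark_session=spark, df=df, page_content_column="text")
docs = loader.load()
```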
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pyspark_dataframe.PySparkDataFrameLoader.html
|
0dc6d3b04307-0
|
langchain.document_loaders.mastodon.MastodonTootsLoader¶
class langchain.document_loaders.mastodon.MastodonTootsLoader(mastodon_accounts: Sequence[str], number_toots: Optional[int] = 100, exclude_replies: bool = False, access_token: Optional[str] = None, api_base_url: str = 'https://mastodon.social')[source]¶
Bases: BaseLoader
Mastodon toots loader.
Instantiate Mastodon toots loader.
Parameters
mastodon_accounts – The list of Mastodon accounts to query.
number_toots – How many toots to pull for each account. Default is 100.
exclude_replies – Whether to exclude reply toots from the load.
Default is False.
access_token – An access token if toots are loaded as a Mastodon app. Can
also be specified via the environment variables “MASTODON_ACCESS_TOKEN”.
api_base_url – A Mastodon API base URL to talk to, if not using the default.
Default is “https://mastodon.social”.
Methods
__init__(mastodon_accounts[, number_toots, ...])
Instantiate Mastodon toots loader.
lazy_load()
A lazy loader for Documents.
load()
Load toots into documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load toots into documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
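Example (illustrative sketch; the account handle is a placeholder and Mastodon.py must be installed):
```python
from langchain.document_loaders.mastodon import MastodonTootsLoader

loader = MastodonTootsLoader(
    mastodon_accounts=["@Gargron@mastodon.social"],  # placeholder public account
    number_toots=20,
)
docs = loader.load()  # one Document per toot
```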
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.mastodon.MastodonTootsLoader.html
|
023e1c897d1a-0
|
langchain.document_loaders.youtube.YoutubeLoader¶
class langchain.document_loaders.youtube.YoutubeLoader(video_id: str, add_video_info: bool = False, language: Union[str, Sequence[str]] = 'en', translation: str = 'en', continue_on_failure: bool = False)[source]¶
Bases: BaseLoader
Loader that loads Youtube transcripts.
Initialize with YouTube video ID.
Methods
__init__(video_id[, add_video_info, ...])
Initialize with YouTube video ID.
extract_video_id(youtube_url)
Extract video id from common YT urls.
from_youtube_url(youtube_url, **kwargs)
Given youtube URL, load video.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
static extract_video_id(youtube_url: str) → str[source]¶
Extract video id from common YT urls.
classmethod from_youtube_url(youtube_url: str, **kwargs: Any) → YoutubeLoader[source]¶
Given youtube URL, load video.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
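Example (illustrative sketch; the video URL is a placeholder and youtube-transcript-api must be installed):
```python
from langchain.document_loaders.youtube import YoutubeLoader

loader = YoutubeLoader.from_youtube_url(
    "https://www.youtube.com/watch?v=dQw4w9WgXcQ",  # placeholder video URL
    add_video_info=False,
)
docs = loader.load()  # the transcript as Document objects
```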
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.youtube.YoutubeLoader.html
|
b434e991cb82-0
|
langchain.document_loaders.unstructured.UnstructuredAPIFileIOLoader¶
class langchain.document_loaders.unstructured.UnstructuredAPIFileIOLoader(file: Union[IO, Sequence[IO]], mode: str = 'single', url: str = 'https://api.unstructured.io/general/v0/general', api_key: str = '', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredFileIOLoader
UnstructuredAPIFileIOLoader uses the Unstructured API to load files.
By default, the loader makes a call to the hosted Unstructured API.
If you are running the unstructured API locally, you can change the
API URL by passing in the url parameter when you initialize the loader.
The hosted Unstructured API requires an API key. See
https://www.unstructured.io/api-key/ if you need to generate a key.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
```python
from langchain.document_loaders import UnstructuredAPIFileIOLoader

with open("example.pdf", "rb") as f:
    loader = UnstructuredAPIFileIOLoader(
        f, mode="elements", strategy="fast", api_key="MY_API_KEY",
    )
    docs = loader.load()
```
References
https://unstructured-io.github.io/unstructured/bricks.html#partition
https://www.unstructured.io/api-key/
https://github.com/Unstructured-IO/unstructured-api
Initialize with file path.
Methods
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredAPIFileIOLoader.html
|
b434e991cb82-1
|
Initialize with file path.
Methods
__init__(file[, mode, url, api_key])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.UnstructuredAPIFileIOLoader.html
|
8600292c08a0-0
|
langchain.document_loaders.odt.UnstructuredODTLoader¶
class langchain.document_loaders.odt.UnstructuredODTLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Bases: UnstructuredFileLoader
Loader that uses unstructured to load open office ODT files.
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
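Example (illustrative sketch; "document.odt" is a placeholder and the unstructured package must be installed):
```python
from langchain.document_loaders.odt import UnstructuredODTLoader

loader = UnstructuredODTLoader("document.odt", mode="elements")
docs = loader.load()
```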
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.odt.UnstructuredODTLoader.html
|
feecd0461c4c-0
|
langchain.document_loaders.parsers.txt.TextParser¶
class langchain.document_loaders.parsers.txt.TextParser[source]¶
Bases: BaseBlobParser
Parser for text blobs.
Methods
__init__()
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob into a document or documents.
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazily parse the blob.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environment.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents
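Example (illustrative sketch; "notes.txt" is a placeholder text file):
```python
from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.parsers.txt import TextParser

parser = TextParser()
docs = parser.parse(Blob.from_path("notes.txt"))
```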
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.txt.TextParser.html
|
cc20d7ac98bd-0
|
langchain.document_loaders.python.PythonLoader¶
class langchain.document_loaders.python.PythonLoader(file_path: str)[source]¶
Bases: TextLoader
Load Python files, respecting any non-default encoding if specified.
Initialize with file path.
Methods
__init__(file_path)
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load from file path.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load from file path.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
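Example (illustrative sketch; "script.py" is a placeholder source file):
```python
from langchain.document_loaders.python import PythonLoader

loader = PythonLoader("script.py")
docs = loader.load()
```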
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.python.PythonLoader.html
|
1a4f577c1edb-0
|
langchain.document_loaders.azure_blob_storage_container.AzureBlobStorageContainerLoader¶
class langchain.document_loaders.azure_blob_storage_container.AzureBlobStorageContainerLoader(conn_str: str, container: str, prefix: str = '')[source]¶
Bases: BaseLoader
Loading Documents from Azure Blob Storage.
Initialize with connection string, container and blob prefix.
Methods
__init__(conn_str, container[, prefix])
Initialize with connection string, container and blob prefix.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
Attributes
conn_str
Connection string for Azure Blob Storage.
container
Container name.
prefix
Prefix for blob names.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
conn_str¶
Connection string for Azure Blob Storage.
container¶
Container name.
prefix¶
Prefix for blob names.
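Example (illustrative sketch; the connection string and container name are placeholders, and azure-storage-blob must be installed):
```python
from langchain.document_loaders.azure_blob_storage_container import (
    AzureBlobStorageContainerLoader,
)

loader = AzureBlobStorageContainerLoader(
    conn_str="<AZURE_STORAGE_CONNECTION_STRING>",  # placeholder
    container="my-container",                      # placeholder
    prefix="docs/",
)
docs = loader.load()
```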
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.azure_blob_storage_container.AzureBlobStorageContainerLoader.html
|
f0bd5bce2eaf-0
|
langchain.document_loaders.gitbook.GitbookLoader¶
class langchain.document_loaders.gitbook.GitbookLoader(web_page: str, load_all_paths: bool = False, base_url: Optional[str] = None, content_selector: str = 'main')[source]¶
Bases: WebBaseLoader
Load GitBook data.
load from either a single page, or
load all (relative) paths in the navbar.
Initialize with web page and whether to load all paths.
Parameters
web_page – The web page to load or the starting point from where
relative paths are discovered.
load_all_paths – If set to True, all relative paths in the navbar
are loaded instead of only web_page.
base_url – If load_all_paths is True, the relative paths are
appended to this base url. Defaults to web_page.
content_selector – The CSS selector for the content to load.
Defaults to “main”.
Methods
__init__(web_page[, load_all_paths, ...])
Initialize with web page and whether to load all paths.
aload()
Load text from the urls in web_path async into Documents.
fetch_all(urls)
Fetch all urls concurrently with rate limiting.
lazy_load()
Lazy load text from the url(s) in web_path.
load()
Fetch text from one single GitBook page.
load_and_split([text_splitter])
Load Documents and split into chunks.
scrape([parser])
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls[, parser])
Fetch all urls, then return soups for all results.
Attributes
bs_get_text_kwargs
kwargs for beautifulsoup4 get_text
default_parser
Default parser to use for BeautifulSoup.
raise_for_status
Raise an exception if http status code denotes an error.
requests_kwargs
kwargs for requests
requests_per_second
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gitbook.GitbookLoader.html
|
f0bd5bce2eaf-1
|
requests_kwargs
kwargs for requests
requests_per_second
Max number of concurrent requests to make.
web_path
aload() → List[Document]¶
Load text from the urls in web_path async into Documents.
async fetch_all(urls: List[str]) → Any¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶
Fetch text from one single GitBook page.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
scrape(parser: Optional[str] = None) → Any¶
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any]¶
Fetch all urls, then return soups for all results.
bs_get_text_kwargs: Dict[str, Any] = {}¶
kwargs for beautifulsoup4 get_text
default_parser: str = 'html.parser'¶
Default parser to use for BeautifulSoup.
raise_for_status: bool = False¶
Raise an exception if http status code denotes an error.
requests_kwargs: Dict[str, Any] = {}¶
kwargs for requests
requests_per_second: int = 2¶
Max number of concurrent requests to make.
property web_path: str¶
web_paths: List[str]¶
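Example (illustrative sketch; the site URL is a placeholder GitBook site):
```python
from langchain.document_loaders.gitbook import GitbookLoader

loader = GitbookLoader("https://docs.gitbook.com", load_all_paths=True)
docs = loader.load()  # one Document per page discovered via the navbar
```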
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.gitbook.GitbookLoader.html
|
fe5c13962b78-0
|
langchain.document_loaders.pdf.BasePDFLoader¶
class langchain.document_loaders.pdf.BasePDFLoader(file_path: str)[source]¶
Bases: BaseLoader, ABC
Base loader class for PDF files.
Defaults to checking for a local file; if the file is a web path, it will download it
to a temporary file, use that, and then clean up the temporary file after completion.
Initialize with file path.
Methods
__init__(file_path)
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
Attributes
source
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
abstract load() → List[Document]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
property source: str¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pdf.BasePDFLoader.html
|
57fd968f01ff-0
|
langchain.document_loaders.snowflake_loader.SnowflakeLoader¶
class langchain.document_loaders.snowflake_loader.SnowflakeLoader(query: str, user: str, password: str, account: str, warehouse: str, role: str, database: str, schema: str, parameters: Optional[Dict[str, Any]] = None, page_content_columns: Optional[List[str]] = None, metadata_columns: Optional[List[str]] = None)[source]¶
Bases: BaseLoader
Loads a query result from Snowflake into a list of documents.
Each document represents one row of the result. The page_content_columns
are written into the page_content of the document. The metadata_columns
are written into the metadata of the document. By default, all columns
are written into the page_content and none into the metadata.
Initialize Snowflake document loader.
Parameters
query – The query to run in Snowflake.
user – Snowflake user.
password – Snowflake password.
account – Snowflake account.
warehouse – Snowflake warehouse.
role – Snowflake role.
database – Snowflake database
schema – Snowflake schema
page_content_columns – Optional. Columns written to Document page_content.
metadata_columns – Optional. Columns written to Document metadata.
Methods
__init__(query, user, password, account, ...)
Initialize Snowflake document loader.
lazy_load()
A lazy loader for Documents.
load()
Load data into document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.snowflake_loader.SnowflakeLoader.html
|
57fd968f01ff-1
|
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
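Example (illustrative sketch; every connection value below is a placeholder, and the snowflake-connector-python package must be installed):
```python
from langchain.document_loaders.snowflake_loader import SnowflakeLoader

loader = SnowflakeLoader(
    query="SELECT title, body FROM articles LIMIT 10",  # placeholder query
    user="USER",
    password="PASSWORD",
    account="ACCOUNT",
    warehouse="WAREHOUSE",
    role="ROLE",
    database="DATABASE",
    schema="SCHEMA",
    page_content_columns=["body"],
    metadata_columns=["title"],
)
docs = loader.load()
```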
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.snowflake_loader.SnowflakeLoader.html
|
6ebadc7c5781-0
|
langchain.document_loaders.whatsapp_chat.concatenate_rows¶
langchain.document_loaders.whatsapp_chat.concatenate_rows(date: str, sender: str, text: str) → str[source]¶
Combine message information in a readable format ready to be used.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.whatsapp_chat.concatenate_rows.html
|
4fba2a21c30c-0
|
langchain.document_loaders.unstructured.validate_unstructured_version¶
langchain.document_loaders.unstructured.validate_unstructured_version(min_unstructured_version: str) → None[source]¶
Raises an error if the installed unstructured version is below the
specified minimum.
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.unstructured.validate_unstructured_version.html
|
d077b494fcab-0
|
langchain.document_loaders.parsers.pdf.PyPDFium2Parser¶
class langchain.document_loaders.parsers.pdf.PyPDFium2Parser[source]¶
Bases: BaseBlobParser
Parse PDFs with PyPDFium2.
Initialize the parser.
Methods
__init__()
Initialize the parser.
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob into a document or documents.
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazily parse the blob.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environment.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents
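Example (illustrative sketch; "example.pdf" is a placeholder and pypdfium2 must be installed):
```python
from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.parsers.pdf import PyPDFium2Parser

parser = PyPDFium2Parser()
docs = parser.parse(Blob.from_path("example.pdf"))  # one Document per page
```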
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.pdf.PyPDFium2Parser.html
|
1840fedf226a-0
|
langchain.document_loaders.s3_directory.S3DirectoryLoader¶
class langchain.document_loaders.s3_directory.S3DirectoryLoader(bucket: str, prefix: str = '')[source]¶
Bases: BaseLoader
Loads documents from an AWS S3 directory.
Initialize with bucket and key name.
Methods
__init__(bucket[, prefix])
Initialize with bucket and key name.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
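Example (illustrative sketch; the bucket and prefix are placeholders, and boto3 credentials must be configured separately):
```python
from langchain.document_loaders.s3_directory import S3DirectoryLoader

loader = S3DirectoryLoader("my-bucket", prefix="reports/")
docs = loader.load()
```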
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.s3_directory.S3DirectoryLoader.html
|
2e973dda6a1a-0
|
langchain.document_loaders.tencent_cos_directory.TencentCOSDirectoryLoader¶
class langchain.document_loaders.tencent_cos_directory.TencentCOSDirectoryLoader(conf: Any, bucket: str, prefix: str = '')[source]¶
Bases: BaseLoader
Loads documents from a Tencent Cloud COS directory.
Initialize with COS config, bucket and prefix.
Parameters
conf – COS config (CosConfig).
bucket – COS bucket name.
prefix – Prefix for the object keys to load.
Methods
__init__(conf, bucket[, prefix])
Initialize with COS config, bucket and prefix.
lazy_load()
Load documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
lazy_load() → Iterator[Document][source]¶
Load documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
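Example (illustrative sketch; all credentials and names are placeholders, the cos-python-sdk-v5 package provides CosConfig, and the exact CosConfig arguments shown are an assumption):
```python
from qcloud_cos import CosConfig

from langchain.document_loaders.tencent_cos_directory import TencentCOSDirectoryLoader

conf = CosConfig(Region="ap-guangzhou", SecretId="SECRET_ID", SecretKey="SECRET_KEY")
loader = TencentCOSDirectoryLoader(conf=conf, bucket="examplebucket-1250000000", prefix="docs/")
docs = loader.load()
```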
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.tencent_cos_directory.TencentCOSDirectoryLoader.html
|
24473bed3080-0
|
langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters¶
class langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters[source]¶
Bases: TypedDict
Parameters for the embaas document extraction API.
Methods
__init__(*args, **kwargs)
clear()
copy()
fromkeys([value])
Create a new dictionary with keys from iterable and values set to value.
get(key[, default])
Return the value for key if key is in the dictionary, else default.
items()
keys()
pop(k[,d])
If the key is not found, return the default if given; otherwise, raise a KeyError.
popitem()
Remove and return a (key, value) pair as a 2-tuple.
setdefault(key[, default])
Insert key with a value of default if key is not in the dictionary.
update([E, ]**F)
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
values()
Attributes
mime_type
The mime type of the document.
file_extension
The file extension of the document.
file_name
The file name of the document.
should_chunk
Whether to chunk the document into pages.
chunk_size
The maximum size of the text chunks.
chunk_overlap
The maximum overlap allowed between chunks.
chunk_splitter
The text splitter class name for creating chunks.
separators
The separators for chunks.
should_embed
Whether to create embeddings for the document in the response.
model
|
https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters.html
|