| id | text | source |
|---|---|---|
7cba0af7fead-1
|
embed_query(text: str) → List[float][source]¶
Generates an embedding for a single piece of text.
Parameters
text (str) – The text to generate an embedding for.
Returns
The embedding for the text.
validator validate_environment » all fields[source]¶
Validates that the Spacy package and the ‘en_core_web_sm’ model are installed.
Parameters
values (Dict) – The values provided to the class constructor.
Returns
The validated values.
Raises
ValueError – If the Spacy package or the ‘en_core_web_sm’ model is not installed.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
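A minimal usage sketch (assuming the spacy package and the en_core_web_sm model are installed; the import path follows the source URL below):
from langchain.embeddings.spacy_embeddings import SpacyEmbeddings
emb = SpacyEmbeddings()  # validate_environment checks spacy and en_core_web_sm
vector = emb.embed_query("This is a test query.")  # List[float]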
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.spacy_embeddings.SpacyEmbeddings.html
|
6704695e37d4-0
|
langchain.embeddings.deepinfra.DeepInfraEmbeddings¶
class langchain.embeddings.deepinfra.DeepInfraEmbeddings(*, model_id: str = 'sentence-transformers/clip-ViT-B-32', normalize: bool = False, embed_instruction: str = 'passage: ', query_instruction: str = 'query: ', model_kwargs: Optional[dict] = None, deepinfra_api_token: Optional[str] = None)[source]¶
Bases: BaseModel, Embeddings
Wrapper around Deep Infra’s embedding inference service.
To use, you should have the
environment variable DEEPINFRA_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
There are multiple embeddings models available,
see https://deepinfra.com/models?type=embeddings.
Example
from langchain.embeddings import DeepInfraEmbeddings
deepinfra_emb = DeepInfraEmbeddings(
model_id="sentence-transformers/clip-ViT-B-32",
deepinfra_api_token="my-api-key"
)
r1 = deepinfra_emb.embed_documents(
[
"Alpha is the first letter of Greek alphabet",
"Beta is the second letter of Greek alphabet",
]
)
r2 = deepinfra_emb.embed_query(
"What is the second letter of Greek alphabet"
)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param deepinfra_api_token: Optional[str] = None¶
param embed_instruction: str = 'passage: '¶
Instruction used to embed documents.
param model_id: str = 'sentence-transformers/clip-ViT-B-32'¶
Embeddings model to use.
param model_kwargs: Optional[dict] = None¶
Other model keyword args
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.deepinfra.DeepInfraEmbeddings.html
|
6704695e37d4-1
|
param normalize: bool = False¶
whether to normalize the computed embeddings
param query_instruction: str = 'query: '¶
Instruction used to embed the query.
embed_documents(texts: List[str]) → List[List[float]][source]¶
Embed documents using a Deep Infra deployed embedding model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Embed a query using a Deep Infra deployed embedding model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
validator validate_environment » all fields[source]¶
Validate that the API key and python package exist in the environment.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.deepinfra.DeepInfraEmbeddings.html
|
8af2a2acc7e6-0
|
langchain.embeddings.dashscope.embed_with_retry¶
langchain.embeddings.dashscope.embed_with_retry(embeddings: DashScopeEmbeddings, **kwargs: Any) → Any[source]¶
Use tenacity to retry the embedding call.
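A hedged sketch of calling it directly; the keyword arguments are assumptions about what the underlying DashScope call accepts, since this helper normally runs inside DashScopeEmbeddings:
from langchain.embeddings.dashscope import DashScopeEmbeddings, embed_with_retry
embeddings = DashScopeEmbeddings(dashscope_api_key="my-api-key")
# "input" and "model" are assumed kwargs forwarded to the DashScope API call.
result = embed_with_retry(embeddings, input=["hello"], model=embeddings.model)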
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.dashscope.embed_with_retry.html
|
59f4c723c6b9-0
|
langchain.embeddings.base.Embeddings¶
class langchain.embeddings.base.Embeddings[source]¶
Bases: ABC
Interface for embedding models.
Methods
__init__()
aembed_documents(texts)
Embed search docs.
aembed_query(text)
Embed query text.
embed_documents(texts)
Embed search docs.
embed_query(text)
Embed query text.
async aembed_documents(texts: List[str]) → List[List[float]][source]¶
Embed search docs.
async aembed_query(text: str) → List[float][source]¶
Embed query text.
abstract embed_documents(texts: List[str]) → List[List[float]][source]¶
Embed search docs.
abstract embed_query(text: str) → List[float][source]¶
Embed query text.
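Only embed_documents and embed_query are abstract, so a toy subclass is enough to satisfy the interface; a sketch with purely illustrative vector values:
from typing import List
from langchain.embeddings.base import Embeddings

class ToyEmbeddings(Embeddings):
    # Maps each text to a deterministic two-dimensional dummy vector.
    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        return [self.embed_query(t) for t in texts]

    def embed_query(self, text: str) -> List[float]:
        return [float(len(text)), float(sum(map(ord, text)) % 97)]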
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.base.Embeddings.html
|
9bf660541009-0
|
langchain.embeddings.openai.OpenAIEmbeddings¶
class langchain.embeddings.openai.OpenAIEmbeddings(*, client: Any = None, model: str = 'text-embedding-ada-002', deployment: str = 'text-embedding-ada-002', openai_api_version: Optional[str] = None, openai_api_base: Optional[str] = None, openai_api_type: Optional[str] = None, openai_proxy: Optional[str] = None, embedding_ctx_length: int = 8191, openai_api_key: Optional[str] = None, openai_organization: Optional[str] = None, allowed_special: Union[Literal['all'], Set[str]] = {}, disallowed_special: Union[Literal['all'], Set[str], Sequence[str]] = 'all', chunk_size: int = 1000, max_retries: int = 6, request_timeout: Optional[Union[float, Tuple[float, float]]] = None, headers: Any = None, tiktoken_model_name: Optional[str] = None, show_progress_bar: bool = False)[source]¶
Bases: BaseModel, Embeddings
Wrapper around OpenAI embedding models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key or pass it
as a named parameter to the constructor.
Example
from langchain.embeddings import OpenAIEmbeddings
openai = OpenAIEmbeddings(openai_api_key="my-api-key")
In order to use the library with Microsoft Azure endpoints, you need to set
the OPENAI_API_TYPE, OPENAI_API_BASE, OPENAI_API_KEY and OPENAI_API_VERSION environment variables.
The OPENAI_API_TYPE must be set to ‘azure’ and the others correspond to
the properties of your endpoint.
In addition, the deployment name must be passed as the model parameter.
Example
import os
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.openai.OpenAIEmbeddings.html
|
9bf660541009-1
|
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-endpoint.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "your AzureOpenAI key"
os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview"
os.environ["OPENAI_PROXY"] = "http://your-corporate-proxy:8080"
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(
deployment="your-embeddings-deployment-name",
model="your-embeddings-model-name",
openai_api_base="https://your-endpoint.openai.azure.com/",
openai_api_type="azure",
)
text = "This is a test query."
query_result = embeddings.embed_query(text)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param allowed_special: Union[Literal['all'], Set[str]] = {}¶
param chunk_size: int = 1000¶
Maximum number of texts to embed in each batch
param deployment: str = 'text-embedding-ada-002'¶
param disallowed_special: Union[Literal['all'], Set[str], Sequence[str]] = 'all'¶
param embedding_ctx_length: int = 8191¶
param headers: Any = None¶
param max_retries: int = 6¶
Maximum number of retries to make when generating.
param model: str = 'text-embedding-ada-002'¶
param openai_api_base: Optional[str] = None¶
param openai_api_key: Optional[str] = None¶
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.openai.OpenAIEmbeddings.html
|
9bf660541009-2
|
param openai_api_type: Optional[str] = None¶
param openai_api_version: Optional[str] = None¶
param openai_organization: Optional[str] = None¶
param openai_proxy: Optional[str] = None¶
param request_timeout: Optional[Union[float, Tuple[float, float]]] = None¶
Timeout in seconds for the OpenAI API request.
param show_progress_bar: bool = False¶
Whether to show a progress bar when embedding.
param tiktoken_model_name: Optional[str] = None¶
The model name to pass to tiktoken when using this class.
Tiktoken is used to count the number of tokens in documents to constrain
them to be under a certain limit. By default, when set to None, this will
be the same as the embedding model name. However, there are some cases
where you may want to use this Embedding class with a model name not
supported by tiktoken. This can include when using Azure embeddings or
when using one of the many model providers that expose an OpenAI-like
API but with different models. In those cases, in order to avoid erroring
when tiktoken is called, you can specify a model name to use here.
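For instance, a hedged sketch of pairing a model name unknown to tiktoken with a known tokenizer when calling an OpenAI-compatible provider (the endpoint and model names are placeholders):
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(
    model="provider-specific-embedding-model",  # placeholder; unknown to tiktoken
    tiktoken_model_name="text-embedding-ada-002",  # used only for token counting
    openai_api_base="https://openai-compatible.example.com/v1",  # placeholder
    openai_api_key="my-api-key",
)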
async aembed_documents(texts: List[str], chunk_size: Optional[int] = 0) → List[List[float]][source]¶
Call out to OpenAI’s embedding endpoint async for embedding search docs.
Parameters
texts – The list of texts to embed.
chunk_size – The chunk size of embeddings. If None, will use the chunk size
specified by the class.
Returns
List of embeddings, one for each text.
async aembed_query(text: str) → List[float][source]¶
Call out to OpenAI’s embedding endpoint async for embedding query text.
Parameters
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.openai.OpenAIEmbeddings.html
|
9bf660541009-3
|
text – The text to embed.
Returns
Embedding for the text.
embed_documents(texts: List[str], chunk_size: Optional[int] = 0) → List[List[float]][source]¶
Call out to OpenAI’s embedding endpoint for embedding search docs.
Parameters
texts – The list of texts to embed.
chunk_size – The chunk size of embeddings. If None, will use the chunk size
specified by the class.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Call out to OpenAI’s embedding endpoint for embedding query text.
Parameters
text – The text to embed.
Returns
Embedding for the text.
validator validate_environment » all fields[source]¶
Validate that the API key and python package exist in the environment.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.openai.OpenAIEmbeddings.html
|
41fde8dac96a-0
|
langchain.embeddings.octoai_embeddings.OctoAIEmbeddings¶
class langchain.embeddings.octoai_embeddings.OctoAIEmbeddings(*, endpoint_url: Optional[str] = None, model_kwargs: Optional[dict] = None, octoai_api_token: Optional[str] = None, embed_instruction: str = 'Represent this input: ', query_instruction: str = 'Represent the question for retrieving similar documents: ')[source]¶
Bases: BaseModel, Embeddings
Wrapper around OctoAI Compute Service embedding models.
The environment variable OCTOAI_API_TOKEN should be set
with your API token, or it can be passed
as a named parameter to the constructor.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param embed_instruction: str = 'Represent this input: '¶
Instruction to use for embedding documents.
param endpoint_url: Optional[str] = None¶
Endpoint URL to use.
param model_kwargs: Optional[dict] = None¶
Keyword arguments to pass to the model.
param octoai_api_token: Optional[str] = None¶
OCTOAI API Token
param query_instruction: str = 'Represent the question for retrieving similar documents: '¶
Instruction to use for embedding query.
embed_documents(texts: List[str]) → List[List[float]][source]¶
Compute document embeddings using an OctoAI instruct model.
embed_query(text: str) → List[float][source]¶
Compute query embedding using an OctoAI instruct model.
validator validate_environment » all fields[source]¶
Ensure that the API key and python package exist in environment.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
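A minimal usage sketch built from the parameters documented above (the endpoint URL is a placeholder):
from langchain.embeddings.octoai_embeddings import OctoAIEmbeddings
embeddings = OctoAIEmbeddings(
    endpoint_url="https://your-endpoint.octoai.run/v1/predict",  # placeholder
    octoai_api_token="my-api-token",
)
doc_vectors = embeddings.embed_documents(["First document", "Second document"])
query_vector = embeddings.embed_query("Which document is first?")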
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.octoai_embeddings.OctoAIEmbeddings.html
|
b1dee6b8a1e4-0
|
langchain.embeddings.google_palm.GooglePalmEmbeddings¶
class langchain.embeddings.google_palm.GooglePalmEmbeddings(*, client: Any = None, google_api_key: Optional[str] = None, model_name: str = 'models/embedding-gecko-001')[source]¶
Bases: BaseModel, Embeddings
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param client: Any = None¶
param google_api_key: Optional[str] = None¶
param model_name: str = 'models/embedding-gecko-001'¶
Model name to use.
embed_documents(texts: List[str]) → List[List[float]][source]¶
Embed search docs.
embed_query(text: str) → List[float][source]¶
Embed query text.
validator validate_environment » all fields[source]¶
Validate that the API key and python package exist.
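A minimal usage sketch (it assumes the google.generativeai package is installed; the key is a placeholder):
from langchain.embeddings.google_palm import GooglePalmEmbeddings
embeddings = GooglePalmEmbeddings(google_api_key="my-api-key")
doc_vectors = embeddings.embed_documents(["Alpha", "Beta"])
query_vector = embeddings.embed_query("What comes after alpha?")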
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.google_palm.GooglePalmEmbeddings.html
|
1a4c24a9396c-0
|
langchain.embeddings.mosaicml.MosaicMLInstructorEmbeddings¶
class langchain.embeddings.mosaicml.MosaicMLInstructorEmbeddings(*, endpoint_url: str = 'https://models.hosted-on.mosaicml.hosting/instructor-xl/v1/predict', embed_instruction: str = 'Represent the document for retrieval: ', query_instruction: str = 'Represent the question for retrieving supporting documents: ', retry_sleep: float = 1.0, mosaicml_api_token: Optional[str] = None)[source]¶
Bases: BaseModel, Embeddings
Wrapper around MosaicML’s embedding inference service.
To use, you should have the
environment variable MOSAICML_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Example
from langchain.embeddings import MosaicMLInstructorEmbeddings
endpoint_url = (
"https://models.hosted-on.mosaicml.hosting/instructor-large/v1/predict"
)
mosaic_llm = MosaicMLInstructorEmbeddings(
endpoint_url=endpoint_url,
mosaicml_api_token="my-api-key"
)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param embed_instruction: str = 'Represent the document for retrieval: '¶
Instruction used to embed documents.
param endpoint_url: str = 'https://models.hosted-on.mosaicml.hosting/instructor-xl/v1/predict'¶
Endpoint URL to use.
param mosaicml_api_token: Optional[str] = None¶
param query_instruction: str = 'Represent the question for retrieving supporting documents: '¶
Instruction used to embed the query.
param retry_sleep: float = 1.0¶
How long to sleep if a rate limit is encountered.
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.mosaicml.MosaicMLInstructorEmbeddings.html
|
1a4c24a9396c-1
|
embed_documents(texts: List[str]) → List[List[float]][source]¶
Embed documents using a MosaicML deployed instructor embedding model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Embed a query using a MosaicML deployed instructor embedding model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
validator validate_environment » all fields[source]¶
Validate that the API key and python package exist in the environment.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.mosaicml.MosaicMLInstructorEmbeddings.html
|
58973ddbaf50-0
|
langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings¶
class langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings(*, cache: ~typing.Optional[bool] = None, verbose: bool = None, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, metadata: ~typing.Optional[~typing.Dict[str, ~typing.Any]] = None, pipeline_ref: ~typing.Any = None, client: ~typing.Any = None, inference_fn: ~typing.Callable = <function _embed_documents>, hardware: ~typing.Any = None, model_load_fn: ~typing.Callable = <function load_embedding_model>, load_fn_kwargs: ~typing.Optional[dict] = None, model_reqs: ~typing.List[str] = ['./', 'sentence_transformers', 'torch'], inference_kwargs: ~typing.Any = None, model_id: str = 'sentence-transformers/all-mpnet-base-v2')[source]¶
Bases: SelfHostedEmbeddings
Runs sentence_transformers embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another cloud
like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example
from langchain.embeddings import SelfHostedHuggingFaceEmbeddings
import runhouse as rh
model_name = "sentence-transformers/all-mpnet-base-v2"
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings.html
|
58973ddbaf50-1
|
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
hf = SelfHostedHuggingFaceEmbeddings(model_name=model_name, hardware=gpu)
Initialize the remote inference function.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param hardware: Any = None¶
Remote hardware to send the inference function to.
param inference_fn: Callable = <function _embed_documents>¶
Inference function to extract the embeddings.
param inference_kwargs: Any = None¶
Any kwargs to pass to the model’s inference function.
param load_fn_kwargs: Optional[dict] = None¶
Key word arguments to pass to the model load function.
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param model_id: str = 'sentence-transformers/all-mpnet-base-v2'¶
Model name to use.
param model_load_fn: Callable = <function load_embedding_model>¶
Function to load the model remotely on the server.
param model_reqs: List[str] = ['./', 'sentence_transformers', 'torch']¶
Requirements to install on hardware to inference the model.
param pipeline_ref: Any = None¶
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings.html
|
58973ddbaf50-2
|
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings.html
|
58973ddbaf50-3
|
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Asynchronously pass messages to the model and return a message prediction.
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
embed_documents(texts: List[str]) → List[List[float]]¶
Compute doc embeddings using a HuggingFace transformer model.
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings.html
|
58973ddbaf50-4
|
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float]¶
Compute query embeddings using a HuggingFace transformer model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
classmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List[str]] = None, device: int = 0, **kwargs: Any) → LLM¶
Init the SelfHostedPipeline from a pipeline object or string.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings.html
|
58973ddbaf50-5
|
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
Useful for checking if an input will fit in a model’s context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]¶
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings.html
|
58973ddbaf50-6
|
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings.html
|
58973ddbaf50-7
|
e.g., ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g., {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings.html
|
bfd0c04d34bf-0
|
langchain.embeddings.self_hosted_hugging_face.load_embedding_model¶
langchain.embeddings.self_hosted_hugging_face.load_embedding_model(model_id: str, instruct: bool = False, device: int = 0) → Any[source]¶
Load the embedding model.
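A hedged sketch of calling it directly, reusing the default model id from SelfHostedHuggingFaceEmbeddings above (assumes sentence_transformers and torch are installed):
from langchain.embeddings.self_hosted_hugging_face import load_embedding_model
model = load_embedding_model(
    model_id="sentence-transformers/all-mpnet-base-v2",
    instruct=False,  # True is assumed to select an instructor-style loader
    device=0,  # GPU index
)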
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.load_embedding_model.html
|
d1994434d818-0
|
langchain.embeddings.embaas.EmbaasEmbeddings¶
class langchain.embeddings.embaas.EmbaasEmbeddings(*, model: str = 'e5-large-v2', instruction: Optional[str] = None, api_url: str = 'https://api.embaas.io/v1/embeddings/', embaas_api_key: Optional[str] = None)[source]¶
Bases: BaseModel, Embeddings
Wrapper around embaas’s embedding service.
To use, you should have the
environment variable EMBAAS_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
# Initialise with default model and instruction
from langchain.embeddings import EmbaasEmbeddings
emb = EmbaasEmbeddings()
# Initialise with custom model and instruction
from langchain.embeddings import EmbaasEmbeddings
emb_model = "instructor-large"
emb_inst = "Represent the Wikipedia document for retrieval"
emb = EmbaasEmbeddings(
model=emb_model,
instruction=emb_inst
)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param api_url: str = 'https://api.embaas.io/v1/embeddings/'¶
The URL for the embaas embeddings API.
param embaas_api_key: Optional[str] = None¶
param instruction: Optional[str] = None¶
Instruction used for domain-specific embeddings.
param model: str = 'e5-large-v2'¶
The model used for embeddings.
embed_documents(texts: List[str]) → List[List[float]][source]¶
Get embeddings for a list of texts.
Parameters
texts – The list of texts to get embeddings for.
Returns
List of embeddings, one for each text.
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.embaas.EmbaasEmbeddings.html
|
d1994434d818-1
|
embed_query(text: str) → List[float][source]¶
Get embeddings for a single text.
Parameters
text – The text to get embeddings for.
Returns
List of embeddings.
validator validate_environment » all fields[source]¶
Validate that the API key and python package exist in the environment.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.embaas.EmbaasEmbeddings.html
|
de601927bbc3-0
|
langchain.embeddings.dashscope.DashScopeEmbeddings¶
class langchain.embeddings.dashscope.DashScopeEmbeddings(*, client: Any = None, model: str = 'text-embedding-v1', dashscope_api_key: Optional[str] = None, max_retries: int = 5)[source]¶
Bases: BaseModel, Embeddings
Wrapper around DashScope embedding models.
To use, you should have the dashscope python package installed, and the
environment variable DASHSCOPE_API_KEY set with your API key or pass it
as a named parameter to the constructor.
Example
from langchain.embeddings import DashScopeEmbeddings
embeddings = DashScopeEmbeddings(dashscope_api_key="my-api-key")
Example
import os
os.environ["DASHSCOPE_API_KEY"] = "your DashScope API KEY"
from langchain.embeddings.dashscope import DashScopeEmbeddings
embeddings = DashScopeEmbeddings(
model="text-embedding-v1",
)
text = "This is a test query."
query_result = embeddings.embed_query(text)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param dashscope_api_key: Optional[str] = None¶
param max_retries: int = 5¶
Maximum number of retries to make when generating.
param model: str = 'text-embedding-v1'¶
embed_documents(texts: List[str]) → List[List[float]][source]¶
Call out to DashScope’s embedding endpoint for embedding search docs.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.dashscope.DashScopeEmbeddings.html
|
de601927bbc3-1
|
embed_query(text: str) → List[float][source]¶
Call out to DashScope’s embedding endpoint for embedding query text.
Parameters
text – The text to embed.
Returns
Embedding for the text.
validator validate_environment » all fields[source]¶
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.dashscope.DashScopeEmbeddings.html
|
2f3c083275ab-0
|
langchain.embeddings.minimax.MiniMaxEmbeddings¶
class langchain.embeddings.minimax.MiniMaxEmbeddings(*, endpoint_url: str = 'https://api.minimax.chat/v1/embeddings', model: str = 'embo-01', embed_type_db: str = 'db', embed_type_query: str = 'query', minimax_group_id: Optional[str] = None, minimax_api_key: Optional[str] = None)[source]¶
Bases: BaseModel, Embeddings
Wrapper around MiniMax’s embedding inference service.
To use, you should have the environment variable MINIMAX_GROUP_ID and
MINIMAX_API_KEY set with your API token, or pass it as a named parameter to
the constructor.
Example
from langchain.embeddings import MiniMaxEmbeddings
embeddings = MiniMaxEmbeddings()
query_text = "This is a test query."
query_result = embeddings.embed_query(query_text)
document_text = "This is a test document."
document_result = embeddings.embed_documents([document_text])
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param embed_type_db: str = 'db'¶
For embed_documents
param embed_type_query: str = 'query'¶
For embed_query
param endpoint_url: str = 'https://api.minimax.chat/v1/embeddings'¶
Endpoint URL to use.
param minimax_api_key: Optional[str] = None¶
API Key for MiniMax API.
param minimax_group_id: Optional[str] = None¶
Group ID for MiniMax API.
param model: str = 'embo-01'¶
Embeddings model name to use.
embed(texts: List[str], embed_type: str) → List[List[float]][source]¶
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.minimax.MiniMaxEmbeddings.html
|
2f3c083275ab-1
|
embed_documents(texts: List[str]) → List[List[float]][source]¶
Embed documents using a MiniMax embedding endpoint.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Embed a query using a MiniMax embedding endpoint.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
validator validate_environment » all fields[source]¶
Validate that the group id and API key exist in the environment.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.minimax.MiniMaxEmbeddings.html
|
d31f9fd9e3bf-0
|
langchain.embeddings.aleph_alpha.AlephAlphaAsymmetricSemanticEmbedding¶
class langchain.embeddings.aleph_alpha.AlephAlphaAsymmetricSemanticEmbedding(*, client: Any = None, model: Optional[str] = 'luminous-base', hosting: Optional[str] = 'https://api.aleph-alpha.com', normalize: Optional[bool] = True, compress_to_size: Optional[int] = 128, contextual_control_threshold: Optional[int] = None, control_log_additive: Optional[bool] = True, aleph_alpha_api_key: Optional[str] = None)[source]¶
Bases: BaseModel, Embeddings
Wrapper for Aleph Alpha’s Asymmetric Embeddings
AA provides you with an endpoint to embed a document and a query.
The models were optimized to make the embeddings of documents and
the query for a document as similar as possible.
To learn more, check out: https://docs.aleph-alpha.com/docs/tasks/semantic_embed/
Example
from langchain.embeddings import AlephAlphaAsymmetricSemanticEmbedding
embeddings = AlephAlphaAsymmetricSemanticEmbedding()
document = "This is a content of the document"
query = "What is the content of the document?"
doc_result = embeddings.embed_documents([document])
query_result = embeddings.embed_query(query)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param aleph_alpha_api_key: Optional[str] = None¶
API key for Aleph Alpha API.
param compress_to_size: Optional[int] = 128¶
Should the returned embeddings come back as the original 5120-dimensional vectors, or be compressed to 128 dimensions.
param contextual_control_threshold: Optional[int] = None¶
Attention control parameters only apply to those tokens that have
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.aleph_alpha.AlephAlphaAsymmetricSemanticEmbedding.html
|
d31f9fd9e3bf-1
|
explicitly been set in the request.
param control_log_additive: Optional[bool] = True¶
Apply controls on prompt items by adding the log(control_factor)
to attention scores.
param hosting: Optional[str] = 'https://api.aleph-alpha.com'¶
Optional parameter that specifies which datacenters may process the request.
param model: Optional[str] = 'luminous-base'¶
Model name to use.
param normalize: Optional[bool] = True¶
Should returned embeddings be normalized
embed_documents(texts: List[str]) → List[List[float]][source]¶
Call out to Aleph Alpha’s asymmetric Document endpoint.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Call out to Aleph Alpha’s asymmetric query embedding endpoint.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
validator validate_environment » all fields[source]¶
Validate that the API key and python package exist in the environment.
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.aleph_alpha.AlephAlphaAsymmetricSemanticEmbedding.html
|
8b97d480fe0a-0
|
langchain.embeddings.modelscope_hub.ModelScopeEmbeddings¶
class langchain.embeddings.modelscope_hub.ModelScopeEmbeddings(*, embed: Any = None, model_id: str = 'damo/nlp_corom_sentence-embedding_english-base')[source]¶
Bases: BaseModel, Embeddings
Wrapper around modelscope_hub embedding models.
To use, you should have the modelscope python package installed.
Example
from langchain.embeddings import ModelScopeEmbeddings
model_id = "damo/nlp_corom_sentence-embedding_english-base"
embed = ModelScopeEmbeddings(model_id=model_id)
Initialize the modelscope embedding pipeline.
param embed: Any = None¶
param model_id: str = 'damo/nlp_corom_sentence-embedding_english-base'¶
Model name to use.
embed_documents(texts: List[str]) → List[List[float]][source]¶
Compute doc embeddings using a modelscope embedding model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Compute query embeddings using a modelscope embedding model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.modelscope_hub.ModelScopeEmbeddings.html
|
8fc4e78a51ca-0
|
langchain.embeddings.bedrock.BedrockEmbeddings¶
class langchain.embeddings.bedrock.BedrockEmbeddings(*, client: Any = None, region_name: Optional[str] = None, credentials_profile_name: Optional[str] = None, model_id: str = 'amazon.titan-e1t-medium', model_kwargs: Optional[Dict] = None)[source]¶
Bases: BaseModel, Embeddings
Embeddings provider to invoke Bedrock embedding models.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Bedrock service.
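A minimal usage sketch under the parameters documented below (the profile name and region are placeholders):
from langchain.embeddings.bedrock import BedrockEmbeddings
embeddings = BedrockEmbeddings(
    credentials_profile_name="bedrock-admin",  # placeholder ~/.aws/credentials profile
    region_name="us-west-2",  # placeholder region
    model_id="amazon.titan-e1t-medium",
)
query_vector = embeddings.embed_query("This is a test query.")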
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param credentials_profile_name: Optional[str] = None¶
The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which
has either access keys or role information specified.
If not specified, the default credential profile or, if on an EC2 instance,
credentials from IMDS will be used.
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
param model_id: str = 'amazon.titan-e1t-medium'¶
Id of the model to call, e.g., amazon.titan-e1t-medium; this is equivalent to the modelId property in the list-foundation-models API.
param model_kwargs: Optional[Dict] = None¶
Key word arguments to pass to the model.
param region_name: Optional[str] = None¶
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.bedrock.BedrockEmbeddings.html
|
8fc4e78a51ca-1
|
The aws region, e.g., us-west-2. Falls back to the AWS_DEFAULT_REGION env variable or the region specified in ~/.aws/config if it is not provided here.
embed_documents(texts: List[str], chunk_size: int = 1) → List[List[float]][source]¶
Compute doc embeddings using a Bedrock model.
Parameters
texts – The list of texts to embed.
chunk_size – Bedrock currently only allows single string
inputs, so chunk size is always 1. This input is here
only for compatibility with the embeddings interface.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Compute query embeddings using a Bedrock model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
validator validate_environment » all fields[source]¶
Validate that AWS credentials and python package exist in the environment.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.bedrock.BedrockEmbeddings.html
|
9c50c40801d0-0
|
langchain.embeddings.cohere.CohereEmbeddings¶
class langchain.embeddings.cohere.CohereEmbeddings(*, client: Any = None, model: str = 'embed-english-v2.0', truncate: Optional[str] = None, cohere_api_key: Optional[str] = None)[source]¶
Bases: BaseModel, Embeddings
Wrapper around Cohere embedding models.
To use, you should have the cohere python package installed, and the
environment variable COHERE_API_KEY set with your API key or pass it
as a named parameter to the constructor.
Example
from langchain.embeddings import CohereEmbeddings
cohere = CohereEmbeddings(
model="embed-english-light-v2.0", cohere_api_key="my-api-key"
)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cohere_api_key: Optional[str] = None¶
param model: str = 'embed-english-v2.0'¶
Model name to use.
param truncate: Optional[str] = None¶
Truncate embeddings that are too long from start or end (“NONE”|”START”|”END”)
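For example, a one-line sketch using one of the documented truncate values:
from langchain.embeddings import CohereEmbeddings
cohere = CohereEmbeddings(cohere_api_key="my-api-key", truncate="END")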
embed_documents(texts: List[str]) → List[List[float]][source]¶
Call out to Cohere’s embedding endpoint.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Call out to Cohere’s embedding endpoint.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
validator validate_environment » all fields[source]¶
Validate that the API key and python package exist in the environment.
model Config[source]¶
Bases: object
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.cohere.CohereEmbeddings.html
|
9c50c40801d0-1
|
Configuration for this pydantic object.
extra = 'forbid'¶
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.cohere.CohereEmbeddings.html
|
5393ca2544d6-0
|
langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler¶
class langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler[source]¶
Bases: ContentHandlerBase[List[str], List[List[float]]]
Content handler for embedding models.
Methods
__init__()
transform_input(prompt, model_kwargs)
Transforms the input to a format that the model can accept as the request body.
transform_output(output)
Transforms the model output into the format the embeddings class expects.
Attributes
accepts
The MIME type of the response data returned from endpoint
content_type
The MIME type of the input data passed to endpoint
abstract transform_input(prompt: INPUT_TYPE, model_kwargs: Dict) → bytes¶
Transforms the input to a format that the model can accept as the request body. Should return bytes or a seekable file-like object in the format specified in the content_type request header.
abstract transform_output(output: bytes) → OUTPUT_TYPE¶
Transforms the model output into the format the embeddings class expects.
accepts: Optional[str] = 'text/plain'¶
The MIME type of the response data returned from endpoint
content_type: Optional[str] = 'text/plain'¶
The MIME type of the input data passed to endpoint
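Both transform methods are abstract, so a concrete handler must encode the request and decode the response itself. A hedged sketch for a JSON endpoint; the "inputs" and "vectors" payload keys are assumptions about a particular deployed model, not part of this interface:
import json
from typing import Dict, List
from langchain.embeddings.sagemaker_endpoint import EmbeddingsContentHandler

class JSONEmbeddingsHandler(EmbeddingsContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompts: List[str], model_kwargs: Dict) -> bytes:
        # "inputs" is an assumed request key for the deployed model.
        return json.dumps({"inputs": prompts, **model_kwargs}).encode("utf-8")

    def transform_output(self, output: bytes) -> List[List[float]]:
        # "vectors" is an assumed response key for the deployed model.
        return json.loads(output.decode("utf-8"))["vectors"]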
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler.html
|
c2f9bc9d3f9d-0
|
langchain.embeddings.self_hosted.SelfHostedEmbeddings¶
class langchain.embeddings.self_hosted.SelfHostedEmbeddings(*, cache: ~typing.Optional[bool] = None, verbose: bool = None, callbacks: ~typing.Optional[~typing.Union[~typing.List[~langchain.callbacks.base.BaseCallbackHandler], ~langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: ~typing.Optional[~langchain.callbacks.base.BaseCallbackManager] = None, tags: ~typing.Optional[~typing.List[str]] = None, metadata: ~typing.Optional[~typing.Dict[str, ~typing.Any]] = None, pipeline_ref: ~typing.Any = None, client: ~typing.Any = None, inference_fn: ~typing.Callable = <function _embed_documents>, hardware: ~typing.Any = None, model_load_fn: ~typing.Callable, load_fn_kwargs: ~typing.Optional[dict] = None, model_reqs: ~typing.List[str] = ['./', 'torch'], inference_kwargs: ~typing.Any = None)[source]¶
Bases: SelfHostedPipeline, Embeddings
Runs custom embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another
cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example using a model load function:
from langchain.embeddings import SelfHostedEmbeddings
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import runhouse as rh
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
def get_pipeline():
model_id = "facebook/bart-large"
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted.SelfHostedEmbeddings.html
|
c2f9bc9d3f9d-1
|
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
return pipeline("feature-extraction", model=model, tokenizer=tokenizer)
embeddings = SelfHostedEmbeddings(
model_load_fn=get_pipeline,
hardware=gpu,
model_reqs=["./", "torch", "transformers"],
)
Example passing in a pipeline path:
from langchain.embeddings import SelfHostedHuggingFaceEmbeddings
import runhouse as rh
import pickle
from transformers import pipeline
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
pipeline = pipeline(model="bert-base-uncased", task="feature-extraction")
rh.blob(pickle.dumps(pipeline),
path="models/pipeline.pkl").save().to(gpu, path="models")
embeddings = SelfHostedHuggingFaceEmbeddings.from_pipeline(
pipeline="models/pipeline.pkl",
hardware=gpu,
model_reqs=["./", "torch", "transformers"],
)
Init the pipeline with an auxiliary function.
The load function must be in global scope to be imported
and run on the server, i.e. in a module and not a REPL or closure.
Then, initialize the remote inference function.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param hardware: Any = None¶
Remote hardware to send the inference function to.
param inference_fn: Callable = <function _embed_documents>¶
Inference function to extract the embeddings on the remote hardware.
param inference_kwargs: Any = None¶
Any kwargs to pass to the model’s inference function.
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted.SelfHostedEmbeddings.html
|
c2f9bc9d3f9d-2
|
param load_fn_kwargs: Optional[dict] = None¶
Key word arguments to pass to the model load function.
param metadata: Optional[Dict[str, Any]] = None¶
Metadata to add to the run trace.
param model_load_fn: Callable [Required]¶
Function to load the model remotely on the server.
param model_reqs: List[str] = ['./', 'torch']¶
Requirements to install on hardware to inference the model.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted.SelfHostedEmbeddings.html
|
c2f9bc9d3f9d-3
|
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Asynchronously pass a string to the model and return a string prediction.
Use this method when calling pure text generation models and only the top candidate generation is needed.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Asynchronously pass messages to the model and return a message prediction.
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted.SelfHostedEmbeddings.html
|
c2f9bc9d3f9d-4
|
Use this method when calling chat models and only the top candidate generation is needed.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
embed_documents(texts: List[str]) → List[List[float]][source]¶
Compute doc embeddings using a HuggingFace transformer model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Compute query embeddings using a HuggingFace transformer model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
classmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List[str]] = None, device: int = 0, **kwargs: Any) → LLM¶
Init the SelfHostedPipeline from a pipeline object or string.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted.SelfHostedEmbeddings.html
|
c2f9bc9d3f9d-5
|
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
take advantage of batched calls,
need more output from the model than just the top generated value,
are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
Parameters
prompts – List of PromptValues. A PromptValue is an object that can be
converted to match the format of any language model (string for pure
text generation models and BaseMessages for chat models).
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks – Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
Useful for checking if an input will fit in a model’s context window.
Parameters
text – The string input to tokenize.
Returns
The integer number of tokens in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the messages.
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted.SelfHostedEmbeddings.html
|
c2f9bc9d3f9d-6
|
Useful for checking if an input will fit in a model’s context window.
Parameters
messages – The message inputs to tokenize.
Returns
The sum of the number of tokens across the messages.
get_token_ids(text: str) → List[int]¶
Return the ordered ids of the tokens in a text.
Parameters
text – The string input to tokenize.
Returns
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Pass a single string input to the model and return a string prediction.
Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages.
Parameters
text – String input to pass to the model.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a string.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Pass a message sequence to the model and return a message prediction.
Use this method when passing in chat messages. If you want to pass in raw text, use predict.
Parameters
messages – A sequence of chat messages corresponding to a single model input.
stop – Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
**kwargs – Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns
Top model prediction as a message.
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted.SelfHostedEmbeddings.html
|
c2f9bc9d3f9d-7
|
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
llm.save(file_path="path/llm.yaml")
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
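Example (illustrative sketch): constructing the class through the from_pipeline classmethod listed above. The cluster name, instance type, and the use of a sentence-transformers model as the pipeline object are assumptions about your runhouse setup, not part of the original docs.
import runhouse as rh
from sentence_transformers import SentenceTransformer
from langchain.embeddings import SelfHostedEmbeddings

# Assumption: an A100 cluster provisioned through runhouse.
gpu = rh.cluster(name="my-gpu-cluster", instance_type="A100:1")
# Assumption: the pipeline object is a sentence-transformers model.
pipeline = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")
embeddings = SelfHostedEmbeddings.from_pipeline(
    pipeline=pipeline,
    hardware=gpu,
    model_reqs=["torch", "sentence_transformers"],
)
vectors = embeddings.embed_documents(["Alpha", "Beta"])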
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted.SelfHostedEmbeddings.html
|
0ff9b74a6633-0
|
langchain.embeddings.huggingface_hub.HuggingFaceHubEmbeddings¶
class langchain.embeddings.huggingface_hub.HuggingFaceHubEmbeddings(*, client: Any = None, repo_id: str = 'sentence-transformers/all-mpnet-base-v2', task: Optional[str] = 'feature-extraction', model_kwargs: Optional[dict] = None, huggingfacehub_api_token: Optional[str] = None)[source]¶
Bases: BaseModel, Embeddings
Wrapper around HuggingFaceHub embedding models.
To use, you should have the huggingface_hub python package installed, and the
environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Example
from langchain.embeddings import HuggingFaceHubEmbeddings
repo_id = "sentence-transformers/all-mpnet-base-v2"
hf = HuggingFaceHubEmbeddings(
repo_id=repo_id,
task="feature-extraction",
huggingfacehub_api_token="my-api-key",
)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param huggingfacehub_api_token: Optional[str] = None¶
param model_kwargs: Optional[dict] = None¶
Keyword arguments to pass to the model.
param repo_id: str = 'sentence-transformers/all-mpnet-base-v2'¶
Model name to use.
param task: Optional[str] = 'feature-extraction'¶
Task to call the model with.
embed_documents(texts: List[str]) → List[List[float]][source]¶
Call out to HuggingFaceHub’s embedding endpoint for embedding search docs.
Parameters
texts – The list of texts to embed.
Returns
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.huggingface_hub.HuggingFaceHubEmbeddings.html
|
0ff9b74a6633-1
|
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Call out to HuggingFaceHub’s embedding endpoint for embedding query text.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
validator validate_environment » all fields[source]¶
Validate that the API key and python package exist in the environment.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
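Usage sketch (illustrative; the token is a placeholder): once constructed, the wrapper exposes the two standard embedding calls documented above.
from langchain.embeddings import HuggingFaceHubEmbeddings

hf = HuggingFaceHubEmbeddings(
    repo_id="sentence-transformers/all-mpnet-base-v2",
    task="feature-extraction",
    huggingfacehub_api_token="my-api-key",  # placeholder
)
doc_vectors = hf.embed_documents(["first document", "second document"])
query_vector = hf.embed_query("what is in the first document?")
assert len(doc_vectors) == 2  # one embedding per input text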
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.huggingface_hub.HuggingFaceHubEmbeddings.html
|
6818466a460b-0
|
langchain.embeddings.sagemaker_endpoint.SagemakerEndpointEmbeddings¶
class langchain.embeddings.sagemaker_endpoint.SagemakerEndpointEmbeddings(*, client: Any = None, endpoint_name: str = '', region_name: str = '', credentials_profile_name: Optional[str] = None, content_handler: EmbeddingsContentHandler, model_kwargs: Optional[Dict] = None, endpoint_kwargs: Optional[Dict] = None)[source]¶
Bases: BaseModel, Embeddings
Wrapper around custom Sagemaker Inference Endpoints.
To use, you must supply the endpoint name from your deployed
Sagemaker model and the region where it is deployed.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Sagemaker endpoint.
See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param content_handler: langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler [Required]¶
The content handler class that provides an input and
output transform functions to handle formats between LLM
and the endpoint.
param credentials_profile_name: Optional[str] = None¶
The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which
has either access keys or role information specified.
If not specified, the default credential profile or, if on an EC2 instance,
credentials from IMDS will be used.
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.sagemaker_endpoint.SagemakerEndpointEmbeddings.html
|
6818466a460b-1
|
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
param endpoint_kwargs: Optional[Dict] = None¶
Optional attributes passed to the invoke_endpoint
function. See the boto3 docs for more information:
https://boto3.amazonaws.com/v1/documentation/api/latest/index.html
param endpoint_name: str = ''¶
The name of the endpoint from the deployed Sagemaker model.
Must be unique within an AWS Region.
param model_kwargs: Optional[Dict] = None¶
Keyword arguments to pass to the model.
param region_name: str = ''¶
The AWS region where the Sagemaker model is deployed, e.g. us-west-2.
embed_documents(texts: List[str], chunk_size: int = 64) → List[List[float]][source]¶
Compute doc embeddings using a SageMaker Inference Endpoint.
Parameters
texts – The list of texts to embed.
chunk_size – The chunk size defines how many input texts will
be grouped together as a request. If None, will use the
chunk size specified by the class.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Compute query embeddings using a SageMaker inference endpoint.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
validator validate_environment » all fields[source]¶
Validate that AWS credentials and the required python package exist in the environment.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
extra = 'forbid'¶
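Because request and response formats are model-specific, the required content_handler translates between LangChain and the endpoint. A minimal sketch, assuming a JSON container that accepts {"inputs": [...]} and returns {"vectors": [...]}; both keys, the endpoint name, and the region are assumptions about your deployment.
import json
from typing import Dict, List

from langchain.embeddings import SagemakerEndpointEmbeddings
from langchain.embeddings.sagemaker_endpoint import EmbeddingsContentHandler

class ContentHandler(EmbeddingsContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompts: List[str], model_kwargs: Dict) -> bytes:
        # Assumption: the container expects an "inputs" field.
        return json.dumps({"inputs": prompts, **model_kwargs}).encode("utf-8")

    def transform_output(self, output: bytes) -> List[List[float]]:
        # Assumption: the container replies with a "vectors" field.
        return json.loads(output.read().decode("utf-8"))["vectors"]

embeddings = SagemakerEndpointEmbeddings(
    endpoint_name="my-embeddings-endpoint",  # assumption
    region_name="us-west-2",
    content_handler=ContentHandler(),
)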
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.sagemaker_endpoint.SagemakerEndpointEmbeddings.html
|
d0e9a454038d-0
|
langchain.embeddings.huggingface.HuggingFaceEmbeddings¶
class langchain.embeddings.huggingface.HuggingFaceEmbeddings(*, client: Any = None, model_name: str = 'sentence-transformers/all-mpnet-base-v2', cache_folder: Optional[str] = None, model_kwargs: Dict[str, Any] = None, encode_kwargs: Dict[str, Any] = None)[source]¶
Bases: BaseModel, Embeddings
Wrapper around sentence_transformers embedding models.
To use, you should have the sentence_transformers python package installed.
Example
from langchain.embeddings import HuggingFaceEmbeddings
model_name = "sentence-transformers/all-mpnet-base-v2"
model_kwargs = {'device': 'cpu'}
encode_kwargs = {'normalize_embeddings': False}
hf = HuggingFaceEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs
)
Initialize the sentence_transformer.
param cache_folder: Optional[str] = None¶
Path to store models.
Can be also set by SENTENCE_TRANSFORMERS_HOME environment variable.
param encode_kwargs: Dict[str, Any] [Optional]¶
Keyword arguments to pass when calling the encode method of the model.
param model_kwargs: Dict[str, Any] [Optional]¶
Keyword arguments to pass to the model.
param model_name: str = 'sentence-transformers/all-mpnet-base-v2'¶
Model name to use.
embed_documents(texts: List[str]) → List[List[float]][source]¶
Compute doc embeddings using a HuggingFace transformer model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Compute query embeddings using a HuggingFace transformer model.
Parameters
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.huggingface.HuggingFaceEmbeddings.html
|
d0e9a454038d-1
|
text – The text to embed.
Returns
Embeddings for the text.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.huggingface.HuggingFaceEmbeddings.html
|
7afc995f8f1a-0
|
langchain.embeddings.clarifai.ClarifaiEmbeddings¶
class langchain.embeddings.clarifai.ClarifaiEmbeddings(*, stub: Any = None, userDataObject: Any = None, model_id: Optional[str] = None, model_version_id: Optional[str] = None, app_id: Optional[str] = None, user_id: Optional[str] = None, pat: Optional[str] = None, api_base: str = 'https://api.clarifai.com')[source]¶
Bases: BaseModel, Embeddings
Wrapper around Clarifai embedding models.
To use, you should have the clarifai python package installed, and the
environment variable CLARIFAI_PAT set with your personal access token or pass it
as a named parameter to the constructor.
Example
from langchain.embeddings import ClarifaiEmbeddings
clarifai = ClarifaiEmbeddings(
model_id="my-model-id", user_id="my-user-id", app_id="my-app-id", pat="my-pat"
)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param api_base: str = 'https://api.clarifai.com'¶
param app_id: Optional[str] = None¶
Clarifai application id to use.
param model_id: Optional[str] = None¶
Model id to use.
param model_version_id: Optional[str] = None¶
Model version id to use.
param pat: Optional[str] = None¶
param userDataObject: Any = None¶
param user_id: Optional[str] = None¶
Clarifai user id to use.
embed_documents(texts: List[str]) → List[List[float]][source]¶
Call out to Clarifai’s embedding models.
Parameters
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.clarifai.ClarifaiEmbeddings.html
|
7afc995f8f1a-1
|
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]¶
Call out to Clarifai’s embedding models.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
validator validate_environment » all fields[source]¶
Validate that the API key and python package exist in the environment.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶
|
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.clarifai.ClarifaiEmbeddings.html
|
309042aacc4c-0
|
langchain.env.get_runtime_environment¶
langchain.env.get_runtime_environment() → dict[source]¶
Get information about the environment.
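Usage sketch (illustrative): the returned dict describes the running process; the exact keys named in the comment are an assumption about this version.
from langchain.env import get_runtime_environment

env = get_runtime_environment()
# Typically includes entries such as "library", "library_version",
# "platform", and "runtime" (key names are an assumption here).
print(env)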
|
https://api.python.langchain.com/en/latest/env/langchain.env.get_runtime_environment.html
|
f9856cfcd351-0
|
All modules for which code is available
langchain.agents.agent
langchain.agents.agent_toolkits.azure_cognitive_services.toolkit
langchain.agents.agent_toolkits.base
langchain.agents.agent_toolkits.csv.base
langchain.agents.agent_toolkits.file_management.toolkit
langchain.agents.agent_toolkits.gmail.toolkit
langchain.agents.agent_toolkits.jira.toolkit
langchain.agents.agent_toolkits.json.base
langchain.agents.agent_toolkits.json.toolkit
langchain.agents.agent_toolkits.nla.tool
langchain.agents.agent_toolkits.nla.toolkit
langchain.agents.agent_toolkits.office365.toolkit
langchain.agents.agent_toolkits.openapi.base
langchain.agents.agent_toolkits.openapi.planner
langchain.agents.agent_toolkits.openapi.spec
langchain.agents.agent_toolkits.openapi.toolkit
langchain.agents.agent_toolkits.pandas.base
langchain.agents.agent_toolkits.playwright.toolkit
langchain.agents.agent_toolkits.powerbi.base
langchain.agents.agent_toolkits.powerbi.chat_base
langchain.agents.agent_toolkits.powerbi.toolkit
langchain.agents.agent_toolkits.python.base
langchain.agents.agent_toolkits.spark.base
langchain.agents.agent_toolkits.spark_sql.base
langchain.agents.agent_toolkits.spark_sql.toolkit
langchain.agents.agent_toolkits.sql.base
langchain.agents.agent_toolkits.sql.toolkit
langchain.agents.agent_toolkits.vectorstore.base
langchain.agents.agent_toolkits.vectorstore.toolkit
langchain.agents.agent_toolkits.zapier.toolkit
langchain.agents.agent_types
langchain.agents.chat.base
langchain.agents.chat.output_parser
langchain.agents.conversational.base
langchain.agents.conversational.output_parser
langchain.agents.conversational_chat.base
langchain.agents.conversational_chat.output_parser
|
https://api.python.langchain.com/en/latest/_modules/index.html
|
f9856cfcd351-1
|
langchain.agents.initialize
langchain.agents.load_tools
langchain.agents.loading
langchain.agents.mrkl.base
langchain.agents.mrkl.output_parser
langchain.agents.openai_functions_agent.base
langchain.agents.openai_functions_multi_agent.base
langchain.agents.react.base
langchain.agents.react.output_parser
langchain.agents.schema
langchain.agents.self_ask_with_search.base
langchain.agents.self_ask_with_search.output_parser
langchain.agents.structured_chat.base
langchain.agents.structured_chat.output_parser
langchain.agents.tools
langchain.agents.utils
langchain.cache
langchain.callbacks.aim_callback
langchain.callbacks.argilla_callback
langchain.callbacks.arize_callback
langchain.callbacks.arthur_callback
langchain.callbacks.base
langchain.callbacks.clearml_callback
langchain.callbacks.comet_ml_callback
langchain.callbacks.context_callback
langchain.callbacks.file
langchain.callbacks.flyte_callback
langchain.callbacks.human
langchain.callbacks.infino_callback
langchain.callbacks.manager
langchain.callbacks.mlflow_callback
langchain.callbacks.openai_info
langchain.callbacks.promptlayer_callback
langchain.callbacks.stdout
langchain.callbacks.streaming_aiter
langchain.callbacks.streaming_aiter_final_only
langchain.callbacks.streaming_stdout
langchain.callbacks.streaming_stdout_final_only
langchain.callbacks.streamlit.__init__
langchain.callbacks.streamlit.mutable_expander
langchain.callbacks.streamlit.streamlit_callback_handler
langchain.callbacks.tracers.base
langchain.callbacks.tracers.evaluation
langchain.callbacks.tracers.langchain
langchain.callbacks.tracers.langchain_v1
langchain.callbacks.tracers.run_collector
langchain.callbacks.tracers.schemas
langchain.callbacks.tracers.stdout
langchain.callbacks.tracers.wandb
langchain.callbacks.utils
langchain.callbacks.wandb_callback
|
https://api.python.langchain.com/en/latest/_modules/index.html
|
f9856cfcd351-2
|
langchain.callbacks.whylabs_callback
langchain.chains.api.base
langchain.chains.api.openapi.chain
langchain.chains.api.openapi.requests_chain
langchain.chains.api.openapi.response_chain
langchain.chains.base
langchain.chains.combine_documents.base
langchain.chains.combine_documents.map_reduce
langchain.chains.combine_documents.map_rerank
langchain.chains.combine_documents.reduce
langchain.chains.combine_documents.refine
langchain.chains.combine_documents.stuff
langchain.chains.constitutional_ai.base
langchain.chains.constitutional_ai.models
langchain.chains.conversation.base
langchain.chains.conversational_retrieval.base
langchain.chains.flare.base
langchain.chains.flare.prompts
langchain.chains.graph_qa.base
langchain.chains.graph_qa.cypher
langchain.chains.graph_qa.hugegraph
langchain.chains.graph_qa.kuzu
langchain.chains.graph_qa.nebulagraph
langchain.chains.graph_qa.sparql
langchain.chains.hyde.base
langchain.chains.llm
langchain.chains.llm_bash.base
langchain.chains.llm_bash.prompt
langchain.chains.llm_checker.base
langchain.chains.llm_math.base
langchain.chains.llm_requests
langchain.chains.llm_summarization_checker.base
langchain.chains.loading
langchain.chains.mapreduce
langchain.chains.moderation
langchain.chains.natbot.base
langchain.chains.natbot.crawler
langchain.chains.openai_functions.base
langchain.chains.openai_functions.citation_fuzzy_match
langchain.chains.openai_functions.extraction
langchain.chains.openai_functions.openapi
langchain.chains.openai_functions.qa_with_structure
|
https://api.python.langchain.com/en/latest/_modules/index.html
|
f9856cfcd351-3
|
langchain.chains.openai_functions.tagging
langchain.chains.openai_functions.utils
langchain.chains.pal.base
langchain.chains.prompt_selector
langchain.chains.qa_generation.base
langchain.chains.qa_with_sources.base
langchain.chains.qa_with_sources.loading
langchain.chains.qa_with_sources.retrieval
langchain.chains.qa_with_sources.vector_db
langchain.chains.query_constructor.base
langchain.chains.query_constructor.ir
langchain.chains.query_constructor.parser
langchain.chains.query_constructor.schema
langchain.chains.question_answering.__init__
langchain.chains.retrieval_qa.base
langchain.chains.router.base
langchain.chains.router.embedding_router
langchain.chains.router.llm_router
langchain.chains.router.multi_prompt
langchain.chains.router.multi_retrieval_qa
langchain.chains.sequential
langchain.chains.sql_database.base
langchain.chains.summarize.__init__
langchain.chains.transform
langchain.chat_models.anthropic
langchain.chat_models.azure_openai
langchain.chat_models.base
langchain.chat_models.fake
langchain.chat_models.google_palm
langchain.chat_models.human
langchain.chat_models.jinachat
langchain.chat_models.openai
langchain.chat_models.promptlayer_openai
langchain.chat_models.vertexai
langchain.client.runner_utils
langchain.docstore.arbitrary_fn
langchain.docstore.base
langchain.docstore.in_memory
langchain.docstore.wikipedia
langchain.document_loaders.acreom
langchain.document_loaders.airbyte_json
langchain.document_loaders.airtable
langchain.document_loaders.apify_dataset
langchain.document_loaders.arxiv
langchain.document_loaders.azlyrics
langchain.document_loaders.azure_blob_storage_container
|
https://api.python.langchain.com/en/latest/_modules/index.html
|
f9856cfcd351-4
|
langchain.document_loaders.azure_blob_storage_file
langchain.document_loaders.base
langchain.document_loaders.bibtex
langchain.document_loaders.bigquery
langchain.document_loaders.bilibili
langchain.document_loaders.blackboard
langchain.document_loaders.blob_loaders.file_system
langchain.document_loaders.blob_loaders.schema
langchain.document_loaders.blob_loaders.youtube_audio
langchain.document_loaders.blockchain
langchain.document_loaders.brave_search
langchain.document_loaders.chatgpt
langchain.document_loaders.college_confidential
langchain.document_loaders.confluence
langchain.document_loaders.conllu
langchain.document_loaders.csv_loader
langchain.document_loaders.cube_semantic
langchain.document_loaders.dataframe
langchain.document_loaders.diffbot
langchain.document_loaders.directory
langchain.document_loaders.discord
langchain.document_loaders.docugami
langchain.document_loaders.duckdb_loader
langchain.document_loaders.email
langchain.document_loaders.embaas
langchain.document_loaders.epub
langchain.document_loaders.evernote
langchain.document_loaders.excel
langchain.document_loaders.facebook_chat
langchain.document_loaders.fauna
langchain.document_loaders.figma
langchain.document_loaders.gcs_directory
langchain.document_loaders.gcs_file
langchain.document_loaders.generic
langchain.document_loaders.git
langchain.document_loaders.gitbook
langchain.document_loaders.github
langchain.document_loaders.googledrive
langchain.document_loaders.gutenberg
langchain.document_loaders.helpers
langchain.document_loaders.hn
langchain.document_loaders.html
langchain.document_loaders.html_bs
langchain.document_loaders.hugging_face_dataset
|
https://api.python.langchain.com/en/latest/_modules/index.html
|
f9856cfcd351-5
|
langchain.document_loaders.ifixit
langchain.document_loaders.image
langchain.document_loaders.image_captions
langchain.document_loaders.imsdb
langchain.document_loaders.iugu
langchain.document_loaders.joplin
langchain.document_loaders.json_loader
langchain.document_loaders.larksuite
langchain.document_loaders.markdown
langchain.document_loaders.mastodon
langchain.document_loaders.max_compute
langchain.document_loaders.mediawikidump
langchain.document_loaders.merge
langchain.document_loaders.mhtml
langchain.document_loaders.modern_treasury
langchain.document_loaders.notebook
langchain.document_loaders.notion
langchain.document_loaders.notiondb
langchain.document_loaders.obsidian
langchain.document_loaders.odt
langchain.document_loaders.onedrive
langchain.document_loaders.onedrive_file
langchain.document_loaders.open_city_data
langchain.document_loaders.org_mode
langchain.document_loaders.parsers.audio
langchain.document_loaders.parsers.generic
langchain.document_loaders.parsers.grobid
langchain.document_loaders.parsers.html.bs4
langchain.document_loaders.parsers.language.code_segmenter
langchain.document_loaders.parsers.language.javascript
langchain.document_loaders.parsers.language.language_parser
langchain.document_loaders.parsers.language.python
langchain.document_loaders.parsers.pdf
langchain.document_loaders.parsers.registry
langchain.document_loaders.parsers.txt
langchain.document_loaders.pdf
langchain.document_loaders.powerpoint
langchain.document_loaders.psychic
langchain.document_loaders.pyspark_dataframe
langchain.document_loaders.python
langchain.document_loaders.readthedocs
langchain.document_loaders.recursive_url_loader
langchain.document_loaders.reddit
|
https://api.python.langchain.com/en/latest/_modules/index.html
|
f9856cfcd351-6
|
langchain.document_loaders.roam
langchain.document_loaders.rst
langchain.document_loaders.rtf
langchain.document_loaders.s3_directory
langchain.document_loaders.s3_file
langchain.document_loaders.sitemap
langchain.document_loaders.slack_directory
langchain.document_loaders.snowflake_loader
langchain.document_loaders.spreedly
langchain.document_loaders.srt
langchain.document_loaders.stripe
langchain.document_loaders.telegram
langchain.document_loaders.tencent_cos_directory
langchain.document_loaders.tencent_cos_file
langchain.document_loaders.text
langchain.document_loaders.tomarkdown
langchain.document_loaders.toml
langchain.document_loaders.trello
langchain.document_loaders.twitter
langchain.document_loaders.unstructured
langchain.document_loaders.url
langchain.document_loaders.url_playwright
langchain.document_loaders.url_selenium
langchain.document_loaders.weather
langchain.document_loaders.web_base
langchain.document_loaders.whatsapp_chat
langchain.document_loaders.wikipedia
langchain.document_loaders.word_document
langchain.document_loaders.xml
langchain.document_loaders.youtube
langchain.document_transformers
langchain.embeddings.aleph_alpha
langchain.embeddings.base
langchain.embeddings.bedrock
langchain.embeddings.clarifai
langchain.embeddings.cohere
langchain.embeddings.dashscope
langchain.embeddings.deepinfra
langchain.embeddings.elasticsearch
langchain.embeddings.embaas
langchain.embeddings.fake
langchain.embeddings.google_palm
langchain.embeddings.huggingface
langchain.embeddings.huggingface_hub
langchain.embeddings.jina
langchain.embeddings.llamacpp
langchain.embeddings.minimax
langchain.embeddings.modelscope_hub
|
https://api.python.langchain.com/en/latest/_modules/index.html
|
f9856cfcd351-7
|
langchain.embeddings.mosaicml
langchain.embeddings.octoai_embeddings
langchain.embeddings.openai
langchain.embeddings.sagemaker_endpoint
langchain.embeddings.self_hosted
langchain.embeddings.self_hosted_hugging_face
langchain.embeddings.spacy_embeddings
langchain.embeddings.tensorflow_hub
langchain.embeddings.vertexai
langchain.env
langchain.evaluation.agents.trajectory_eval_chain
langchain.evaluation.comparison.eval_chain
langchain.evaluation.criteria.eval_chain
langchain.evaluation.embedding_distance.base
langchain.evaluation.loading
langchain.evaluation.qa.eval_chain
langchain.evaluation.qa.generate_chain
langchain.evaluation.run_evaluators.base
langchain.evaluation.run_evaluators.implementations
langchain.evaluation.run_evaluators.loading
langchain.evaluation.run_evaluators.string_run_evaluator
langchain.evaluation.schema
langchain.evaluation.string_distance.base
langchain.example_generator
langchain.experimental.autonomous_agents.autogpt.memory
langchain.experimental.autonomous_agents.autogpt.output_parser
langchain.experimental.autonomous_agents.autogpt.prompt
langchain.experimental.autonomous_agents.autogpt.prompt_generator
langchain.experimental.autonomous_agents.baby_agi.baby_agi
langchain.experimental.autonomous_agents.baby_agi.task_creation
langchain.experimental.autonomous_agents.baby_agi.task_execution
langchain.experimental.autonomous_agents.baby_agi.task_prioritization
langchain.experimental.generative_agents.generative_agent
langchain.experimental.generative_agents.memory
langchain.experimental.llms.jsonformer_decoder
langchain.experimental.llms.rellm_decoder
langchain.experimental.plan_and_execute.agent_executor
langchain.experimental.plan_and_execute.executors.agent_executor
langchain.experimental.plan_and_execute.executors.base
langchain.experimental.plan_and_execute.planners.base
|
https://api.python.langchain.com/en/latest/_modules/index.html
|
f9856cfcd351-8
|
langchain.experimental.plan_and_execute.planners.chat_planner
langchain.experimental.plan_and_execute.schema
langchain.formatting
langchain.graphs.networkx_graph
langchain.indexes.graph
langchain.indexes.vectorstore
langchain.input
langchain.llms.ai21
langchain.llms.aleph_alpha
langchain.llms.amazon_api_gateway
langchain.llms.anthropic
langchain.llms.anyscale
langchain.llms.aviary
langchain.llms.azureml_endpoint
langchain.llms.bananadev
langchain.llms.base
langchain.llms.baseten
langchain.llms.beam
langchain.llms.bedrock
langchain.llms.cerebriumai
langchain.llms.clarifai
langchain.llms.cohere
langchain.llms.ctransformers
langchain.llms.databricks
langchain.llms.deepinfra
langchain.llms.fake
langchain.llms.forefrontai
langchain.llms.google_palm
langchain.llms.gooseai
langchain.llms.gpt4all
langchain.llms.huggingface_endpoint
langchain.llms.huggingface_hub
langchain.llms.huggingface_pipeline
langchain.llms.huggingface_text_gen_inference
langchain.llms.human
langchain.llms.llamacpp
langchain.llms.loading
langchain.llms.manifest
langchain.llms.modal
langchain.llms.mosaicml
langchain.llms.nlpcloud
langchain.llms.octoai_endpoint
langchain.llms.openai
langchain.llms.openllm
langchain.llms.openlm
langchain.llms.petals
langchain.llms.pipelineai
langchain.llms.predictionguard
|
https://api.python.langchain.com/en/latest/_modules/index.html
|
f9856cfcd351-9
|
langchain.llms.promptlayer_openai
langchain.llms.replicate
langchain.llms.rwkv
langchain.llms.sagemaker_endpoint
langchain.llms.self_hosted
langchain.llms.self_hosted_hugging_face
langchain.llms.stochasticai
langchain.llms.textgen
langchain.llms.utils
langchain.llms.vertexai
langchain.llms.writer
langchain.load.dump
langchain.load.load
langchain.load.serializable
langchain.math_utils
langchain.memory.buffer
langchain.memory.buffer_window
langchain.memory.chat_memory
langchain.memory.chat_message_histories.cassandra
langchain.memory.chat_message_histories.cosmos_db
langchain.memory.chat_message_histories.dynamodb
langchain.memory.chat_message_histories.file
langchain.memory.chat_message_histories.firestore
langchain.memory.chat_message_histories.in_memory
langchain.memory.chat_message_histories.momento
langchain.memory.chat_message_histories.mongodb
langchain.memory.chat_message_histories.postgres
langchain.memory.chat_message_histories.redis
langchain.memory.chat_message_histories.sql
langchain.memory.chat_message_histories.zep
langchain.memory.combined
langchain.memory.entity
langchain.memory.kg
langchain.memory.motorhead_memory
langchain.memory.readonly
langchain.memory.simple
langchain.memory.summary
langchain.memory.summary_buffer
langchain.memory.token_buffer
langchain.memory.utils
langchain.memory.vectorstore
langchain.output_parsers.boolean
langchain.output_parsers.combining
langchain.output_parsers.datetime
langchain.output_parsers.enum
langchain.output_parsers.fix
langchain.output_parsers.json
langchain.output_parsers.list
langchain.output_parsers.loading
langchain.output_parsers.openai_functions
langchain.output_parsers.pydantic
|
https://api.python.langchain.com/en/latest/_modules/index.html
|
f9856cfcd351-10
|
langchain.output_parsers.rail_parser
langchain.output_parsers.regex
langchain.output_parsers.regex_dict
langchain.output_parsers.retry
langchain.output_parsers.structured
langchain.prompts.base
langchain.prompts.chat
langchain.prompts.example_selector.base
langchain.prompts.example_selector.length_based
langchain.prompts.example_selector.ngram_overlap
langchain.prompts.example_selector.semantic_similarity
langchain.prompts.few_shot
langchain.prompts.few_shot_with_templates
langchain.prompts.loading
langchain.prompts.pipeline
langchain.prompts.prompt
langchain.requests
langchain.retrievers.arxiv
langchain.retrievers.azure_cognitive_search
langchain.retrievers.chaindesk
langchain.retrievers.chatgpt_plugin_retriever
langchain.retrievers.contextual_compression
langchain.retrievers.databerry
langchain.retrievers.docarray
langchain.retrievers.document_compressors.base
langchain.retrievers.document_compressors.chain_extract
langchain.retrievers.document_compressors.chain_filter
langchain.retrievers.document_compressors.cohere_rerank
langchain.retrievers.document_compressors.embeddings_filter
langchain.retrievers.elastic_search_bm25
langchain.retrievers.kendra
langchain.retrievers.knn
langchain.retrievers.llama_index
langchain.retrievers.merger_retriever
langchain.retrievers.metal
langchain.retrievers.milvus
langchain.retrievers.multi_query
langchain.retrievers.pinecone_hybrid_search
langchain.retrievers.pubmed
langchain.retrievers.remote_retriever
langchain.retrievers.self_query.base
langchain.retrievers.self_query.chroma
|
https://api.python.langchain.com/en/latest/_modules/index.html
|
f9856cfcd351-11
|
langchain.retrievers.self_query.myscale
langchain.retrievers.self_query.pinecone
langchain.retrievers.self_query.qdrant
langchain.retrievers.self_query.weaviate
langchain.retrievers.svm
langchain.retrievers.tfidf
langchain.retrievers.time_weighted_retriever
langchain.retrievers.vespa_retriever
langchain.retrievers.weaviate_hybrid_search
langchain.retrievers.wikipedia
langchain.retrievers.zep
langchain.retrievers.zilliz
langchain.schema.agent
langchain.schema.document
langchain.schema.language_model
langchain.schema.memory
langchain.schema.messages
langchain.schema.output
langchain.schema.output_parser
langchain.schema.prompt
langchain.schema.prompt_template
langchain.schema.retriever
langchain.server
langchain.sql_database
langchain.text_splitter
langchain.tools.arxiv.tool
langchain.tools.azure_cognitive_services.form_recognizer
langchain.tools.azure_cognitive_services.image_analysis
langchain.tools.azure_cognitive_services.speech2text
langchain.tools.azure_cognitive_services.text2speech
langchain.tools.azure_cognitive_services.utils
langchain.tools.base
langchain.tools.bing_search.tool
langchain.tools.brave_search.tool
langchain.tools.convert_to_openai
langchain.tools.dataforseo_api_search.tool
langchain.tools.ddg_search.tool
langchain.tools.file_management.copy
langchain.tools.file_management.delete
langchain.tools.file_management.file_search
langchain.tools.file_management.list_dir
langchain.tools.file_management.move
langchain.tools.file_management.read
langchain.tools.file_management.utils
langchain.tools.file_management.write
langchain.tools.gmail.base
langchain.tools.gmail.create_draft
langchain.tools.gmail.get_message
|
https://api.python.langchain.com/en/latest/_modules/index.html
|
f9856cfcd351-12
|
langchain.tools.gmail.get_thread
langchain.tools.gmail.search
langchain.tools.gmail.send_message
langchain.tools.gmail.utils
langchain.tools.google_places.tool
langchain.tools.google_search.tool
langchain.tools.google_serper.tool
langchain.tools.graphql.tool
langchain.tools.human.tool
langchain.tools.ifttt
langchain.tools.interaction.tool
langchain.tools.jira.tool
langchain.tools.json.tool
langchain.tools.metaphor_search.tool
langchain.tools.office365.base
langchain.tools.office365.create_draft_message
langchain.tools.office365.events_search
langchain.tools.office365.messages_search
langchain.tools.office365.send_event
langchain.tools.office365.send_message
langchain.tools.office365.utils
langchain.tools.openapi.utils.api_models
langchain.tools.openweathermap.tool
langchain.tools.playwright.base
langchain.tools.playwright.click
langchain.tools.playwright.current_page
langchain.tools.playwright.extract_hyperlinks
langchain.tools.playwright.extract_text
langchain.tools.playwright.get_elements
langchain.tools.playwright.navigate
langchain.tools.playwright.navigate_back
langchain.tools.playwright.utils
langchain.tools.plugin
langchain.tools.powerbi.tool
langchain.tools.pubmed.tool
langchain.tools.python.tool
langchain.tools.requests.tool
langchain.tools.scenexplain.tool
langchain.tools.searx_search.tool
langchain.tools.shell.tool
langchain.tools.sleep.tool
langchain.tools.spark_sql.tool
langchain.tools.sql_database.tool
langchain.tools.steamship_image_generation.tool
langchain.tools.steamship_image_generation.utils
langchain.tools.vectorstore.tool
langchain.tools.wikipedia.tool
langchain.tools.wolfram_alpha.tool
langchain.tools.youtube.search
langchain.tools.zapier.tool
langchain.utilities.apify
langchain.utilities.arxiv
|
https://api.python.langchain.com/en/latest/_modules/index.html
|
f9856cfcd351-13
|
langchain.utilities.awslambda
langchain.utilities.bibtex
langchain.utilities.bing_search
langchain.utilities.brave_search
langchain.utilities.dataforseo_api_search
langchain.utilities.duckduckgo_search
langchain.utilities.google_places_api
langchain.utilities.google_search
langchain.utilities.google_serper
langchain.utilities.graphql
langchain.utilities.jira
langchain.utilities.loading
langchain.utilities.metaphor_search
langchain.utilities.openapi
langchain.utilities.openweathermap
langchain.utilities.powerbi
langchain.utilities.pupmed
langchain.utilities.python
langchain.utilities.scenexplain
langchain.utilities.searx_search
langchain.utilities.serpapi
langchain.utilities.twilio
langchain.utilities.vertexai
langchain.utilities.wikipedia
langchain.utilities.wolfram_alpha
langchain.utilities.zapier
langchain.utils
langchain.vectorstores.alibabacloud_opensearch
langchain.vectorstores.analyticdb
langchain.vectorstores.annoy
langchain.vectorstores.atlas
langchain.vectorstores.awadb
langchain.vectorstores.azuresearch
langchain.vectorstores.base
langchain.vectorstores.cassandra
langchain.vectorstores.chroma
langchain.vectorstores.clarifai
langchain.vectorstores.clickhouse
langchain.vectorstores.deeplake
langchain.vectorstores.docarray.base
langchain.vectorstores.docarray.hnsw
langchain.vectorstores.docarray.in_memory
langchain.vectorstores.elastic_vector_search
langchain.vectorstores.faiss
langchain.vectorstores.hologres
langchain.vectorstores.lancedb
langchain.vectorstores.marqo
langchain.vectorstores.matching_engine
langchain.vectorstores.milvus
langchain.vectorstores.mongodb_atlas
langchain.vectorstores.myscale
langchain.vectorstores.opensearch_vector_search
|
https://api.python.langchain.com/en/latest/_modules/index.html
|
f9856cfcd351-14
|
langchain.vectorstores.pgembedding
langchain.vectorstores.pgvector
langchain.vectorstores.pinecone
langchain.vectorstores.qdrant
langchain.vectorstores.redis
langchain.vectorstores.rocksetdb
langchain.vectorstores.singlestoredb
langchain.vectorstores.sklearn
langchain.vectorstores.starrocks
langchain.vectorstores.supabase
langchain.vectorstores.tair
langchain.vectorstores.tigris
langchain.vectorstores.typesense
langchain.vectorstores.utils
langchain.vectorstores.vectara
langchain.vectorstores.weaviate
langchain.vectorstores.zilliz
pydantic.config
pydantic.env_settings
pydantic.utils
|
https://api.python.langchain.com/en/latest/_modules/index.html
|
ee466c2e94b0-0
|
Source code for langchain.formatting
"""Utilities for formatting strings."""
from string import Formatter
from typing import Any, List, Mapping, Sequence, Union
[docs]class StrictFormatter(Formatter):
"""A subclass of formatter that checks for extra keys."""
[docs] def check_unused_args(
self,
used_args: Sequence[Union[int, str]],
args: Sequence,
kwargs: Mapping[str, Any],
) -> None:
"""Check to see if extra parameters are passed."""
extra = set(kwargs).difference(used_args)
if extra:
raise KeyError(extra)
[docs] def vformat(
self, format_string: str, args: Sequence, kwargs: Mapping[str, Any]
) -> str:
"""Check that no arguments are provided."""
if len(args) > 0:
raise ValueError(
"No arguments should be provided, "
"everything should be passed as keyword arguments."
)
return super().vformat(format_string, args, kwargs)
[docs] def validate_input_variables(
self, format_string: str, input_variables: List[str]
) -> None:
"""Check that all input variables can be used in the format string."""
dummy_inputs = {input_variable: "foo" for input_variable in input_variables}
super().format(format_string, **dummy_inputs)
formatter = StrictFormatter()
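A short illustrative demonstration of the two checks above: unused keyword arguments raise KeyError via check_unused_args, and positional arguments raise ValueError via vformat.
from langchain.formatting import StrictFormatter

fmt = StrictFormatter()
assert fmt.format("{a} {b}", a=1, b=2) == "1 2"  # every key is used
try:
    fmt.format("{a}", a=1, c=3)  # "c" is provided but never used
except KeyError as err:
    print("unused keys:", err)
try:
    fmt.format("{0}", "positional")  # positional arguments are rejected
except ValueError as err:
    print(err)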
|
https://api.python.langchain.com/en/latest/_modules/langchain/formatting.html
|
aea7c4a66cf0-0
|
Source code for langchain.server
"""Script to run langchain-server locally using docker-compose."""
import subprocess
from pathlib import Path
from langchainplus_sdk.cli.main import get_docker_compose_command
[docs]def main() -> None:
"""Run the langchain server locally."""
p = Path(__file__).absolute().parent / "docker-compose.yaml"
docker_compose_command = get_docker_compose_command()
subprocess.run([*docker_compose_command, "-f", str(p), "pull"])
subprocess.run([*docker_compose_command, "-f", str(p), "up"])
if __name__ == "__main__":
main()
|
https://api.python.langchain.com/en/latest/_modules/langchain/server.html
|
f006edbd588c-0
|
Source code for langchain.document_transformers
"""Transform documents"""
from typing import Any, Callable, List, Sequence
import numpy as np
from pydantic import BaseModel, Field
from langchain.embeddings.base import Embeddings
from langchain.math_utils import cosine_similarity
from langchain.schema import BaseDocumentTransformer, Document
class _DocumentWithState(Document):
"""Wrapper for a document that includes arbitrary state."""
state: dict = Field(default_factory=dict)
"""State associated with the document."""
def to_document(self) -> Document:
"""Convert the DocumentWithState to a Document."""
return Document(page_content=self.page_content, metadata=self.metadata)
@classmethod
def from_document(cls, doc: Document) -> "_DocumentWithState":
"""Create a DocumentWithState from a Document."""
if isinstance(doc, cls):
return doc
return cls(page_content=doc.page_content, metadata=doc.metadata)
[docs]def get_stateful_documents(
documents: Sequence[Document],
) -> Sequence[_DocumentWithState]:
"""Convert a list of documents to a list of documents with state.
Args:
documents: The documents to convert.
Returns:
A list of documents with state.
"""
return [_DocumentWithState.from_document(doc) for doc in documents]
def _filter_similar_embeddings(
embedded_documents: List[List[float]], similarity_fn: Callable, threshold: float
) -> List[int]:
"""Filter redundant documents based on the similarity of their embeddings."""
similarity = np.tril(similarity_fn(embedded_documents, embedded_documents), k=-1)
redundant = np.where(similarity > threshold)
redundant_stacked = np.column_stack(redundant)
|
https://api.python.langchain.com/en/latest/_modules/langchain/document_transformers.html
|
f006edbd588c-1
|
redundant_sorted = np.argsort(similarity[redundant])[::-1]
included_idxs = set(range(len(embedded_documents)))
for first_idx, second_idx in redundant_stacked[redundant_sorted]:
if first_idx in included_idxs and second_idx in included_idxs:
# Default to dropping the second document of any highly similar pair.
included_idxs.remove(second_idx)
return list(sorted(included_idxs))
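A small numeric sketch of the filtering logic above: np.tril(..., k=-1) keeps only entries with row index greater than column index, so each similar pair (i, j) with i > j is seen once, and the second element of the pair tuple (the lower index j) is the one dropped.
import numpy as np

# Documents 0 and 2 are near-duplicates (similarity 0.99).
sim = np.array([
    [1.00, 0.10, 0.99],
    [0.10, 1.00, 0.20],
    [0.99, 0.20, 1.00],
])
lower = np.tril(sim, k=-1)          # only i > j entries survive
redundant = np.where(lower > 0.95)  # (array([2]), array([0]))
print(np.column_stack(redundant))   # [[2 0]] -> pair (2, 0), drop index 0
# _filter_similar_embeddings would therefore return [1, 2].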
def _get_embeddings_from_stateful_docs(
embeddings: Embeddings, documents: Sequence[_DocumentWithState]
) -> List[List[float]]:
if len(documents) and "embedded_doc" in documents[0].state:
embedded_documents = [doc.state["embedded_doc"] for doc in documents]
else:
embedded_documents = embeddings.embed_documents(
[d.page_content for d in documents]
)
for doc, embedding in zip(documents, embedded_documents):
doc.state["embedded_doc"] = embedding
return embedded_documents
def _filter_cluster_embeddings(
embedded_documents: List[List[float]],
num_clusters: int,
num_closest: int,
random_state: int,
remove_duplicates: bool,
) -> List[int]:
"""Filter documents based on proximity of their embeddings to clusters."""
try:
from sklearn.cluster import KMeans
except ImportError:
raise ValueError(
"sklearn package not found, please install it with "
"`pip install scikit-learn`"
)
kmeans = KMeans(n_clusters=num_clusters, random_state=random_state).fit(
embedded_documents
)
closest_indices = []
# Loop through the number of clusters you have
|
https://api.python.langchain.com/en/latest/_modules/langchain/document_transformers.html
|
f006edbd588c-2
|
for i in range(num_clusters):
# Get the list of distances from that particular cluster center
distances = np.linalg.norm(
embedded_documents - kmeans.cluster_centers_[i], axis=1
)
# Find the indices of the num_closest unique closest vectors
# (using argsort to find the smallest distances)
if remove_duplicates:
# Only add vectors that have not already been selected.
closest_indices_sorted = [
x
for x in np.argsort(distances)[:num_closest]
if x not in closest_indices
]
else:
# Skip duplicates and add the next closest vector.
closest_indices_sorted = [
x for x in np.argsort(distances) if x not in closest_indices
][:num_closest]
# Append those positions to the closest indices list
closest_indices.extend(closest_indices_sorted)
return closest_indices
[docs]class EmbeddingsRedundantFilter(BaseDocumentTransformer, BaseModel):
"""Filter that drops redundant documents by comparing their embeddings."""
embeddings: Embeddings
"""Embeddings to use for embedding document contents."""
similarity_fn: Callable = cosine_similarity
"""Similarity function for comparing documents. Function expected to take as input
two matrices (List[List[float]]) and return a matrix of scores where higher values
indicate greater similarity."""
similarity_threshold: float = 0.95
"""Threshold for determining when two documents are similar enough
to be considered redundant."""
[docs] class Config:
"""Configuration for this pydantic object."""
arbitrary_types_allowed = True
[docs] def transform_documents(
self, documents: Sequence[Document], **kwargs: Any
|
https://api.python.langchain.com/en/latest/_modules/langchain/document_transformers.html
|
f006edbd588c-3
|
) -> Sequence[Document]:
"""Filter down documents."""
stateful_documents = get_stateful_documents(documents)
embedded_documents = _get_embeddings_from_stateful_docs(
self.embeddings, stateful_documents
)
included_idxs = _filter_similar_embeddings(
embedded_documents, self.similarity_fn, self.similarity_threshold
)
return [stateful_documents[i] for i in sorted(included_idxs)]
[docs] async def atransform_documents(
self, documents: Sequence[Document], **kwargs: Any
) -> Sequence[Document]:
raise NotImplementedError
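Usage sketch for the redundancy filter (illustrative): FakeEmbeddings produces random vectors and only exercises the API here; with a real embeddings model, near-duplicate texts would collapse to a single document.
from langchain.document_transformers import EmbeddingsRedundantFilter
from langchain.embeddings.fake import FakeEmbeddings
from langchain.schema import Document

docs = [
    Document(page_content="LangChain is a framework for LLM apps."),
    Document(page_content="LangChain is a framework for LLM applications."),
    Document(page_content="Bananas are yellow."),
]
redundant_filter = EmbeddingsRedundantFilter(
    embeddings=FakeEmbeddings(size=32), similarity_threshold=0.95
)
filtered = redundant_filter.transform_documents(docs)
print(len(filtered))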
[docs]class EmbeddingsClusteringFilter(BaseDocumentTransformer, BaseModel):
"""Perform K-means clustering on document vectors.
Returns a configurable number of documents closest to each cluster center."""
embeddings: Embeddings
"""Embeddings to use for embedding document contents."""
num_clusters: int = 5
"""Number of clusters. Groups of documents with similar meaning."""
num_closest: int = 1
"""The number of closest vectors to return for each cluster center."""
random_state: int = 42
"""Controls the random number generator used to initialize the cluster centroids.
If you set the random_state parameter to None, the KMeans algorithm will use a
random number generator that is seeded with the current time. This means
that the results of the KMeans algorithm will be different each time you
run it."""
sorted: bool = False
"""By default results are re-ordered "grouping" them by cluster, if sorted is true
result will be ordered by the original position from the retriever"""
remove_duplicates = False
|
https://api.python.langchain.com/en/latest/_modules/langchain/document_transformers.html
|
f006edbd588c-4
|
""" By default duplicated results are skipped and replaced by the next closest
vector in the cluster. If remove_duplicates is true no replacement will be done:
This could dramatically reduce results when there is a lot of overlap beetween
clusters.
"""
[docs] class Config:
"""Configuration for this pydantic object."""
arbitrary_types_allowed = True
[docs] def transform_documents(
self, documents: Sequence[Document], **kwargs: Any
) -> Sequence[Document]:
"""Filter down documents."""
stateful_documents = get_stateful_documents(documents)
embedded_documents = _get_embeddings_from_stateful_docs(
self.embeddings, stateful_documents
)
included_idxs = _filter_cluster_embeddings(
embedded_documents,
self.num_clusters,
self.num_closest,
self.random_state,
self.remove_duplicates,
)
results = sorted(included_idxs) if self.sorted else included_idxs
return [stateful_documents[i] for i in results]
[docs] async def atransform_documents(
self, documents: Sequence[Document], **kwargs: Any
) -> Sequence[Document]:
raise NotImplementedError
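A parallel sketch for the clustering filter (illustrative; requires scikit-learn): ten documents are reduced to num_clusters * num_closest representatives.
from langchain.document_transformers import EmbeddingsClusteringFilter
from langchain.embeddings.fake import FakeEmbeddings
from langchain.schema import Document

docs = [Document(page_content=f"doc {i}") for i in range(10)]
clustering_filter = EmbeddingsClusteringFilter(
    embeddings=FakeEmbeddings(size=32),
    num_clusters=3,
    num_closest=1,
)
representatives = clustering_filter.transform_documents(docs)
print(len(representatives))  # 3 = num_clusters * num_closest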
|
https://api.python.langchain.com/en/latest/_modules/langchain/document_transformers.html
|
88e6d621d2c5-0
|
Source code for langchain.requests
"""Lightweight wrapper around requests library, with async support."""
from contextlib import asynccontextmanager
from typing import Any, AsyncGenerator, Dict, Optional
import aiohttp
import requests
from pydantic import BaseModel, Extra
[docs]class Requests(BaseModel):
"""Wrapper around requests to handle auth and async.
The main purpose of this wrapper is to handle authentication (by saving
headers) and enable easy async methods on the same base object.
"""
headers: Optional[Dict[str, str]] = None
aiosession: Optional[aiohttp.ClientSession] = None
[docs] class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
arbitrary_types_allowed = True
[docs] def get(self, url: str, **kwargs: Any) -> requests.Response:
"""GET the URL and return the text."""
return requests.get(url, headers=self.headers, **kwargs)
[docs] def post(self, url: str, data: Dict[str, Any], **kwargs: Any) -> requests.Response:
"""POST to the URL and return the text."""
return requests.post(url, json=data, headers=self.headers, **kwargs)
[docs] def patch(self, url: str, data: Dict[str, Any], **kwargs: Any) -> requests.Response:
"""PATCH the URL and return the text."""
return requests.patch(url, json=data, headers=self.headers, **kwargs)
[docs] def put(self, url: str, data: Dict[str, Any], **kwargs: Any) -> requests.Response:
"""PUT the URL and return the text."""
return requests.put(url, json=data, headers=self.headers, **kwargs)
|
https://api.python.langchain.com/en/latest/_modules/langchain/requests.html
|
88e6d621d2c5-1
|
[docs] def delete(self, url: str, **kwargs: Any) -> requests.Response:
"""DELETE the URL and return the text."""
return requests.delete(url, headers=self.headers, **kwargs)
@asynccontextmanager
async def _arequest(
self, method: str, url: str, **kwargs: Any
) -> AsyncGenerator[aiohttp.ClientResponse, None]:
"""Make an async request."""
if not self.aiosession:
async with aiohttp.ClientSession() as session:
async with session.request(
method, url, headers=self.headers, **kwargs
) as response:
yield response
else:
async with self.aiosession.request(
method, url, headers=self.headers, **kwargs
) as response:
yield response
[docs] @asynccontextmanager
async def aget(
self, url: str, **kwargs: Any
) -> AsyncGenerator[aiohttp.ClientResponse, None]:
"""GET the URL and return the text asynchronously."""
async with self._arequest("GET", url, **kwargs) as response:
yield response
[docs] @asynccontextmanager
async def apost(
self, url: str, data: Dict[str, Any], **kwargs: Any
) -> AsyncGenerator[aiohttp.ClientResponse, None]:
"""POST to the URL and return the text asynchronously."""
async with self._arequest("POST", url, json=data, **kwargs) as response:
yield response
[docs] @asynccontextmanager
async def apatch(
|
https://api.python.langchain.com/en/latest/_modules/langchain/requests.html
|
88e6d621d2c5-2
|
self, url: str, data: Dict[str, Any], **kwargs: Any
) -> AsyncGenerator[aiohttp.ClientResponse, None]:
"""PATCH the URL and return the text asynchronously."""
async with self._arequest("PATCH", url, json=data, **kwargs) as response:
yield response
[docs] @asynccontextmanager
async def aput(
self, url: str, data: Dict[str, Any], **kwargs: Any
) -> AsyncGenerator[aiohttp.ClientResponse, None]:
"""PUT the URL and return the text asynchronously."""
async with self._arequest("PUT", url, json=data, **kwargs) as response:
yield response
[docs] @asynccontextmanager
async def adelete(
self, url: str, **kwargs: Any
) -> AsyncGenerator[aiohttp.ClientResponse, None]:
"""DELETE the URL and return the text asynchronously."""
async with self._arequest("DELETE", url, **kwargs) as response:
yield response
[docs]class TextRequestsWrapper(BaseModel):
"""Lightweight wrapper around requests library.
The main purpose of this wrapper is to always return a text output.
"""
headers: Optional[Dict[str, str]] = None
aiosession: Optional[aiohttp.ClientSession] = None
[docs] class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
arbitrary_types_allowed = True
@property
def requests(self) -> Requests:
return Requests(headers=self.headers, aiosession=self.aiosession)
|
https://api.python.langchain.com/en/latest/_modules/langchain/requests.html
|
88e6d621d2c5-3
|
[docs] def get(self, url: str, **kwargs: Any) -> str:
"""GET the URL and return the text."""
return self.requests.get(url, **kwargs).text
[docs] def post(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:
"""POST to the URL and return the text."""
return self.requests.post(url, data, **kwargs).text
[docs] def patch(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:
"""PATCH the URL and return the text."""
return self.requests.patch(url, data, **kwargs).text
[docs] def put(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:
"""PUT the URL and return the text."""
return self.requests.put(url, data, **kwargs).text
[docs] def delete(self, url: str, **kwargs: Any) -> str:
"""DELETE the URL and return the text."""
return self.requests.delete(url, **kwargs).text
[docs] async def aget(self, url: str, **kwargs: Any) -> str:
"""GET the URL and return the text asynchronously."""
async with self.requests.aget(url, **kwargs) as response:
return await response.text()
[docs] async def apost(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:
"""POST to the URL and return the text asynchronously."""
async with self.requests.apost(url, data, **kwargs) as response:
return await response.text()
|
https://api.python.langchain.com/en/latest/_modules/langchain/requests.html
|
88e6d621d2c5-4
|
[docs] async def apatch(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:
"""PATCH the URL and return the text asynchronously."""
async with self.requests.apatch(url, data, **kwargs) as response:
return await response.text()
[docs] async def aput(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:
"""PUT the URL and return the text asynchronously."""
async with self.requests.aput(url, data, **kwargs) as response:
return await response.text()
[docs] async def adelete(self, url: str, **kwargs: Any) -> str:
"""DELETE the URL and return the text asynchronously."""
async with self.requests.adelete(url, **kwargs) as response:
return await response.text()
# For backwards compatibility
RequestsWrapper = TextRequestsWrapper
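Usage sketch for both wrappers (illustrative; example.com stands in for a real URL and the bearer token is a placeholder):
import asyncio

from langchain.requests import Requests, TextRequestsWrapper

# Requests returns full Response objects and carries auth headers.
req = Requests(headers={"Authorization": "Bearer my-token"})
print(req.get("https://example.com").status_code)

# TextRequestsWrapper always returns the response body as text.
text_req = TextRequestsWrapper()
print(text_req.get("https://example.com")[:60])

async def main() -> None:
    body = await text_req.aget("https://example.com")
    print(body[:60])

asyncio.run(main())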
|
https://api.python.langchain.com/en/latest/_modules/langchain/requests.html
|
ca5dfb8d2c94-0
|
Source code for langchain.utils
"""Generic utility functions."""
import contextlib
import datetime
import importlib
import os
from importlib.metadata import version
from typing import Any, Callable, Dict, List, Optional, Tuple
from packaging.version import parse
from requests import HTTPError, Response
[docs]def get_from_dict_or_env(
data: Dict[str, Any], key: str, env_key: str, default: Optional[str] = None
) -> str:
"""Get a value from a dictionary or an environment variable."""
if key in data and data[key]:
return data[key]
else:
return get_from_env(key, env_key, default=default)
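For example, the validate_environment validators on the embeddings wrappers above resolve an API key from either the constructor kwargs or the environment with this helper; a short illustrative sketch:
import os

from langchain.utils import get_from_dict_or_env

os.environ["MY_API_KEY"] = "secret-from-env"
# A truthy value in the dict wins; otherwise the env var is consulted.
assert get_from_dict_or_env(
    {"my_api_key": "explicit"}, "my_api_key", "MY_API_KEY"
) == "explicit"
assert get_from_dict_or_env({}, "my_api_key", "MY_API_KEY") == "secret-from-env"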
[docs]def get_from_env(key: str, env_key: str, default: Optional[str] = None) -> str:
"""Get a value from a dictionary or an environment variable."""
if env_key in os.environ and os.environ[env_key]:
return os.environ[env_key]
elif default is not None:
return default
else:
raise ValueError(
f"Did not find {key}, please add an environment variable"
f" `{env_key}` which contains it, or pass"
f" `{key}` as a named parameter."
)
[docs]def xor_args(*arg_groups: Tuple[str, ...]) -> Callable:
"""Validate specified keyword args are mutually exclusive."""
def decorator(func: Callable) -> Callable:
def wrapper(*args: Any, **kwargs: Any) -> Any:
"""Validate exactly one arg in each group is not None."""
counts = [
sum(1 for arg in arg_group if kwargs.get(arg) is not None)
for arg_group in arg_groups
]
invalid_groups = [i for i, count in enumerate(counts) if count != 1]
if invalid_groups:
invalid_group_names = [", ".join(arg_groups[i]) for i in invalid_groups]
raise ValueError(
"Exactly one argument in each of the following"
" groups must be defined:"
f" {', '.join(invalid_group_names)}"
)
return func(*args, **kwargs)
return wrapper
return decorator
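Example (a sketch; `fetch` is a hypothetical function, not part of langchain):
from langchain.utils import xor_args

@xor_args(("url", "path"))
def fetch(url=None, path=None):
    return url or path

fetch(url="https://example.com")  # ok: exactly one argument in the group is set
# fetch()                          # would raise ValueError: neither is set
# fetch(url="x", path="y")         # would raise ValueError: both are set
# Note: the wrapper inspects keyword arguments only, not positional ones.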
[docs]def raise_for_status_with_text(response: Response) -> None:
"""Raise an error with the response text."""
try:
response.raise_for_status()
except HTTPError as e:
raise ValueError(response.text) from e
[docs]def stringify_value(val: Any) -> str:
"""Stringify a value.
Args:
val: The value to stringify.
Returns:
str: The stringified value.
"""
if isinstance(val, str):
return val
elif isinstance(val, dict):
return "\n" + stringify_dict(val)
elif isinstance(val, list):
return "\n".join(stringify_value(v) for v in val)
else:
return str(val)
[docs]def stringify_dict(data: dict) -> str:
"""Stringify a dictionary.
Args:
data: The dictionary to stringify.
Returns:
str: The stringified dictionary.
"""
text = ""
for key, value in data.items():
text += key + ": " + stringify_value(value) + "\n"
return text
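Example (illustrative; nested dicts are recursively stringified onto following lines):
from langchain.utils import stringify_dict
record = {"name": "alpha", "meta": {"size": 3}}
stringify_dict(record)  # -> "name: alpha\nmeta: \nsize: 3\n\n"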
[docs]def comma_list(items: List[Any]) -> str:
return ", ".join(str(item) for item in items)
[docs]@contextlib.contextmanager
def mock_now(dt_value): # type: ignore
"""Context manager for mocking out datetime.now() in unit tests.
Example:
with mock_now(datetime.datetime(2011, 2, 3, 10, 11)):
assert datetime.datetime.now() == datetime.datetime(2011, 2, 3, 10, 11)
"""
class MockDateTime(datetime.datetime):
"""Mock datetime.datetime.now() with a fixed datetime."""
@classmethod
def now(cls): # type: ignore
# Create a copy of dt_value.
return datetime.datetime(
dt_value.year,
dt_value.month,
dt_value.day,
dt_value.hour,
dt_value.minute,
dt_value.second,
dt_value.microsecond,
dt_value.tzinfo,
)
real_datetime = datetime.datetime
datetime.datetime = MockDateTime
try:
yield datetime.datetime
finally:
datetime.datetime = real_datetime
[docs]def guard_import(
module_name: str, *, pip_name: Optional[str] = None, package: Optional[str] = None
) -> Any:
"""Dynamically imports a module and raises a helpful exception if the module is not
installed."""
try:
module = importlib.import_module(module_name, package)
except ImportError:
raise ImportError(
f"Could not import {module_name} python package. "
f"Please install it with `pip install {pip_name or module_name}`."
)
return module
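Example (assumes the packages are installed):
from langchain.utils import guard_import
np = guard_import("numpy")
# When the pip distribution name differs from the module name, pass it explicitly
# so a failed import suggests the right install command:
yaml = guard_import("yaml", pip_name="pyyaml")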
[docs]def check_package_version(
package: str,
lt_version: Optional[str] = None,
lte_version: Optional[str] = None,
gt_version: Optional[str] = None,
gte_version: Optional[str] = None,
) -> None:
"""Check the version of a package."""
imported_version = parse(version(package))
if lt_version is not None and imported_version >= parse(lt_version):
raise ValueError(
f"Expected {package} version to be < {lt_version}. Received "
f"{imported_version}."
)
if lte_version is not None and imported_version > parse(lte_version):
raise ValueError(
f"Expected {package} version to be <= {lte_version}. Received "
f"{imported_version}."
)
if gt_version is not None and imported_version <= parse(gt_version):
raise ValueError(
f"Expected {package} version to be > {gt_version}. Received "
f"{imported_version}."
)
if gte_version is not None and imported_version < parse(gte_version):
raise ValueError(
f"Expected {package} version to be >= {gte_version}. Received "
f"{imported_version}."
)
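Example (illustrative; raises ValueError when the installed version violates a bound):
from langchain.utils import check_package_version
check_package_version("requests", gte_version="2.0.0")  # passes for requests >= 2.0.0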
https://api.python.langchain.com/en/latest/_modules/langchain/utils.html
Source code for langchain.text_splitter
"""Functionality for splitting text."""
from __future__ import annotations
import copy
import logging
import re
from abc import ABC, abstractmethod
from dataclasses import dataclass
from enum import Enum
from typing import (
AbstractSet,
Any,
Callable,
Collection,
Dict,
Iterable,
List,
Literal,
Optional,
Sequence,
Tuple,
Type,
TypedDict,
TypeVar,
Union,
cast,
)
from langchain.docstore.document import Document
from langchain.schema import BaseDocumentTransformer
logger = logging.getLogger(__name__)
TS = TypeVar("TS", bound="TextSplitter")
def _split_text_with_regex(
text: str, separator: str, keep_separator: bool
) -> List[str]:
# Now that we have the separator, split the text
if separator:
if keep_separator:
# The parentheses in the pattern keep the delimiters in the result.
_splits = re.split(f"({separator})", text)
splits = [_splits[i] + _splits[i + 1] for i in range(1, len(_splits), 2)]
if len(_splits) % 2 == 0:
splits += _splits[-1:]
splits = [_splits[0]] + splits
else:
splits = re.split(separator, text)
else:
splits = list(text)
return [s for s in splits if s != ""]
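For illustration, a small trace of this private helper (the separator is treated as a regex pattern):
_split_text_with_regex("one. two. three", r"\. ", keep_separator=False)
# -> ['one', 'two', 'three']
_split_text_with_regex("one. two. three", r"\. ", keep_separator=True)
# -> ['one', '. two', '. three']   (delimiters are glued onto the following split)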
[docs]class TextSplitter(BaseDocumentTransformer, ABC):
"""Interface for splitting text into chunks."""
def __init__(
self,
chunk_size: int = 4000,
chunk_overlap: int = 200,
length_function: Callable[[str], int] = len,
keep_separator: bool = False,
add_start_index: bool = False,
) -> None:
"""Create a new TextSplitter.
Args:
chunk_size: Maximum size of chunks to return
chunk_overlap: Overlap in characters between chunks
length_function: Function that measures the length of given chunks
keep_separator: Whether to keep the separator in the chunks
add_start_index: If `True`, includes chunk's start index in metadata
"""
if chunk_overlap > chunk_size:
raise ValueError(
f"Got a larger chunk overlap ({chunk_overlap}) than chunk size "
f"({chunk_size}), should be smaller."
)
self._chunk_size = chunk_size
self._chunk_overlap = chunk_overlap
self._length_function = length_function
self._keep_separator = keep_separator
self._add_start_index = add_start_index
[docs] @abstractmethod
def split_text(self, text: str) -> List[str]:
"""Split text into multiple components."""
[docs] def create_documents(
self, texts: List[str], metadatas: Optional[List[dict]] = None
) -> List[Document]:
"""Create documents from a list of texts."""
_metadatas = metadatas or [{}] * len(texts)
documents = []
for i, text in enumerate(texts):
index = -1
for chunk in self.split_text(text):
metadata = copy.deepcopy(_metadatas[i])
if self._add_start_index:
index = text.find(chunk, index + 1)
metadata["start_index"] = index
new_doc = Document(page_content=chunk, metadata=metadata)
documents.append(new_doc)
return documents
[docs] def split_documents(self, documents: Iterable[Document]) -> List[Document]:
"""Split documents."""
texts, metadatas = [], []
for doc in documents:
texts.append(doc.page_content)
metadatas.append(doc.metadata)
return self.create_documents(texts, metadatas=metadatas)
def _join_docs(self, docs: List[str], separator: str) -> Optional[str]:
text = separator.join(docs)
text = text.strip()
if text == "":
return None
else:
return text
def _merge_splits(self, splits: Iterable[str], separator: str) -> List[str]:
# We now want to combine these smaller pieces into medium size
# chunks to send to the LLM.
separator_len = self._length_function(separator)
docs = []
current_doc: List[str] = []
total = 0
for d in splits:
_len = self._length_function(d)
if (
total + _len + (separator_len if len(current_doc) > 0 else 0)
> self._chunk_size
):
if total > self._chunk_size:
logger.warning(
f"Created a chunk of size {total}, "
f"which is longer than the specified {self._chunk_size}"
)
if len(current_doc) > 0:
)
if len(current_doc) > 0:
doc = self._join_docs(current_doc, separator)
if doc is not None:
docs.append(doc)
# Keep popping splits off the front of the current chunk while:
# - the accumulated length still exceeds the chunk overlap,
# - or adding the next split would still overflow the chunk size
while total > self._chunk_overlap or (
total + _len + (separator_len if len(current_doc) > 0 else 0)
> self._chunk_size
and total > 0
):
total -= self._length_function(current_doc[0]) + (
separator_len if len(current_doc) > 1 else 0
)
current_doc = current_doc[1:]
current_doc.append(d)
total += _len + (separator_len if len(current_doc) > 1 else 0)
doc = self._join_docs(current_doc, separator)
if doc is not None:
docs.append(doc)
return docs
[docs] @classmethod
def from_huggingface_tokenizer(cls, tokenizer: Any, **kwargs: Any) -> TextSplitter:
"""Text splitter that uses HuggingFace tokenizer to count length."""
try:
from transformers import PreTrainedTokenizerBase
if not isinstance(tokenizer, PreTrainedTokenizerBase):
raise ValueError(
"Tokenizer received was not an instance of PreTrainedTokenizerBase"
)
def _huggingface_tokenizer_length(text: str) -> int:
return len(tokenizer.encode(text))
except ImportError:
raise ImportError(
"Could not import transformers python package. "
"Please install it with `pip install transformers`."
)
return cls(length_function=_huggingface_tokenizer_length, **kwargs)
[docs] @classmethod
def from_tiktoken_encoder(
cls: Type[TS],
encoding_name: str = "gpt2",
model_name: Optional[str] = None,
allowed_special: Union[Literal["all"], AbstractSet[str]] = set(),
disallowed_special: Union[Literal["all"], Collection[str]] = "all",
**kwargs: Any,
) -> TS:
"""Text splitter that uses tiktoken encoder to count length."""
try:
import tiktoken
except ImportError:
raise ImportError(
"Could not import tiktoken python package. "
"This is needed in order to calculate max_tokens_for_prompt. "
"Please install it with `pip install tiktoken`."
)
if model_name is not None:
enc = tiktoken.encoding_for_model(model_name)
else:
enc = tiktoken.get_encoding(encoding_name)
def _tiktoken_encoder(text: str) -> int:
return len(
enc.encode(
text,
allowed_special=allowed_special,
disallowed_special=disallowed_special,
)
)
if issubclass(cls, TokenTextSplitter):
extra_kwargs = {
"encoding_name": encoding_name,
"model_name": model_name,
"allowed_special": allowed_special,
"disallowed_special": disallowed_special,
}
kwargs = {**kwargs, **extra_kwargs}
return cls(length_function=_tiktoken_encoder, **kwargs)
[docs] def transform_documents(
self, documents: Sequence[Document], **kwargs: Any
) -> Sequence[Document]:
"""Transform sequence of documents by splitting them."""
return self.split_documents(list(documents))
[docs] async def atransform_documents(
self, documents: Sequence[Document], **kwargs: Any
) -> Sequence[Document]:
"""Asynchronously transform a sequence of documents by splitting them."""
raise NotImplementedError
[docs]class CharacterTextSplitter(TextSplitter):
"""Implementation of splitting text that looks at characters."""
def __init__(self, separator: str = "\n\n", **kwargs: Any) -> None:
"""Create a new TextSplitter."""
super().__init__(**kwargs)
self._separator = separator
[docs] def split_text(self, text: str) -> List[str]:
"""Split incoming text and return chunks."""
# First we naively split the large input into a bunch of smaller ones.
splits = _split_text_with_regex(text, self._separator, self._keep_separator)
_separator = "" if self._keep_separator else self._separator
return self._merge_splits(splits, _separator)
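Example (illustrative sizes):
from langchain.text_splitter import CharacterTextSplitter

text = "\n\n".join(f"Paragraph number {i}." for i in range(10))
splitter = CharacterTextSplitter(separator="\n\n", chunk_size=50, chunk_overlap=0)
chunks = splitter.split_text(text)
# Paragraphs are merged greedily until a chunk would exceed 50 characters.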
[docs]class LineType(TypedDict):
"""Line type as typed dict."""
metadata: Dict[str, str]
content: str
[docs]class HeaderType(TypedDict):
"""Header type as typed dict."""
level: int
name: str
data: str
class MarkdownHeaderTextSplitter:
"""Implementation of splitting markdown files based on specified headers."""
def __init__(
self, headers_to_split_on: List[Tuple[str, str]], return_each_line: bool = False
):
):
"""Create a new MarkdownHeaderTextSplitter.
Args:
headers_to_split_on: Headers we want to track
return_each_line: Return each line w/ associated headers
"""
# Output line-by-line or aggregated into chunks w/ common headers
self.return_each_line = return_each_line
# Given the headers we want to split on
# (e.g., "#", "##"), sort by separator length, longest first
self.headers_to_split_on = sorted(
headers_to_split_on, key=lambda split: len(split[0]), reverse=True
)
def aggregate_lines_to_chunks(self, lines: List[LineType]) -> List[Document]:
"""Combine lines with common metadata into chunks
Args:
lines: Line of text / associated header metadata
"""
aggregated_chunks: List[LineType] = []
for line in lines:
if (
aggregated_chunks
and aggregated_chunks[-1]["metadata"] == line["metadata"]
):
# If the last line in the aggregated list
# has the same metadata as the current line,
# append the current content to the last line's content
aggregated_chunks[-1]["content"] += " \n" + line["content"]
else:
# Otherwise, append the current line to the aggregated list
aggregated_chunks.append(line)
return [
Document(page_content=chunk["content"], metadata=chunk["metadata"])
for chunk in aggregated_chunks
]
def split_text(self, text: str) -> List[Document]:
"""Split markdown file
Args:
text: Markdown file"""
# Split the input text by newline character ("\n").
lines = text.split("\n")
# Final output
lines_with_metadata: List[LineType] = []
# Content and metadata of the chunk currently being processed
current_content: List[str] = []
current_metadata: Dict[str, str] = {}
# Keep track of the nested header structure
# header_stack: List[Dict[str, Union[int, str]]] = []
header_stack: List[HeaderType] = []
initial_metadata: Dict[str, str] = {}
for line in lines:
stripped_line = line.strip()
# Check each line against each of the header types (e.g., #, ##)
for sep, name in self.headers_to_split_on:
# Check if line starts with a header that we intend to split on
if stripped_line.startswith(sep) and (
# Header with no text OR header is followed by space
# Both are valid conditions that sep is being used as a header
len(stripped_line) == len(sep)
or stripped_line[len(sep)] == " "
):
# Ensure we are tracking the header as metadata
if name is not None:
# Get the current header level
current_header_level = sep.count("#")
# Pop out headers of lower or same level from the stack
while (
header_stack
and header_stack[-1]["level"] >= current_header_level
):
# We have encountered a new header
# at the same or higher level
popped_header = header_stack.pop()
# Clear the metadata for the
# popped header in initial_metadata
if popped_header["name"] in initial_metadata:
initial_metadata.pop(popped_header["name"])
# Push the current header to the stack
header: HeaderType = {
"level": current_header_level,
"name": name,
"data": stripped_line[len(sep) :].strip(),
}
header_stack.append(header)
# Update initial_metadata with the current header
initial_metadata[name] = header["data"]
# Add the previous line to the lines_with_metadata
# only if current_content is not empty
if current_content:
lines_with_metadata.append(
{
"content": "\n".join(current_content),
"metadata": current_metadata.copy(),
}
)
current_content.clear()
break
else:
if stripped_line:
current_content.append(stripped_line)
elif current_content:
lines_with_metadata.append(
{
"content": "\n".join(current_content),
"metadata": current_metadata.copy(),
}
)
current_content.clear()
current_metadata = initial_metadata.copy()
if current_content:
lines_with_metadata.append(
{"content": "\n".join(current_content), "metadata": current_metadata}
)
# lines_with_metadata has each line with associated header metadata
# aggregate these into chunks based on common metadata
if not self.return_each_line:
return self.aggregate_lines_to_chunks(lines_with_metadata)
else:
return [
Document(page_content=chunk["content"], metadata=chunk["metadata"])
for chunk in lines_with_metadata
]
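Example (illustrative):
from langchain.text_splitter import MarkdownHeaderTextSplitter

md = "# Title\n\nIntro text\n\n## Section A\n\nBody of A"
splitter = MarkdownHeaderTextSplitter(
    headers_to_split_on=[("#", "Header 1"), ("##", "Header 2")]
)
docs = splitter.split_text(md)
# docs[0]: page_content="Intro text", metadata={"Header 1": "Title"}
# docs[1]: page_content="Body of A", metadata={"Header 1": "Title", "Header 2": "Section A"}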
# In newer Python versions (3.10+) this could be:
# @dataclass(frozen=True, kw_only=True, slots=True)
@dataclass(frozen=True)
class Tokenizer:
chunk_overlap: int
tokens_per_chunk: int
decode: Callable[[List[int]], str]
encode: Callable[[str], List[int]]
[docs]def split_text_on_tokens(*, text: str, tokenizer: Tokenizer) -> List[str]:
"""Split incoming text and return chunks."""
splits: List[str] = []
input_ids = tokenizer.encode(text)
start_idx = 0
cur_idx = min(start_idx + tokenizer.tokens_per_chunk, len(input_ids))
chunk_ids = input_ids[start_idx:cur_idx]
while start_idx < len(input_ids):
splits.append(tokenizer.decode(chunk_ids))
start_idx += tokenizer.tokens_per_chunk - tokenizer.chunk_overlap
cur_idx = min(start_idx + tokenizer.tokens_per_chunk, len(input_ids))
chunk_ids = input_ids[start_idx:cur_idx]
return splits
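For illustration, a toy tokenizer (one "token" per character) makes the sliding window visible; real callers pass tiktoken or HuggingFace encode/decode functions:
toy = Tokenizer(
    chunk_overlap=2,
    tokens_per_chunk=5,
    decode=lambda ids: "".join(chr(i) for i in ids),
    encode=lambda text: [ord(c) for c in text],
)
split_text_on_tokens(text="abcdefghij", tokenizer=toy)
# -> ['abcde', 'defgh', 'ghij', 'j']  (each window starts 5 - 2 = 3 tokens later)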
[docs]class TokenTextSplitter(TextSplitter):
"""Implementation of splitting text that looks at tokens."""
def __init__(
self,
encoding_name: str = "gpt2",
model_name: Optional[str] = None,
allowed_special: Union[Literal["all"], AbstractSet[str]] = set(),
disallowed_special: Union[Literal["all"], Collection[str]] = "all",
**kwargs: Any,
) -> None:
"""Create a new TextSplitter."""
super().__init__(**kwargs)
try:
import tiktoken
except ImportError:
raise ImportError(
"Could not import tiktoken python package. "
"This is needed in order to for TokenTextSplitter. "
"Please install it with `pip install tiktoken`."
)
if model_name is not None:
enc = tiktoken.encoding_for_model(model_name)
else:
enc = tiktoken.get_encoding(encoding_name)
self._tokenizer = enc
self._allowed_special = allowed_special
self._disallowed_special = disallowed_special
[docs] def split_text(self, text: str) -> List[str]:
def _encode(_text: str) -> List[int]:
return self._tokenizer.encode(
_text,
allowed_special=self._allowed_special,
disallowed_special=self._disallowed_special,
)
tokenizer = Tokenizer(
chunk_overlap=self._chunk_overlap,
tokens_per_chunk=self._chunk_size,
decode=self._tokenizer.decode,
encode=_encode,
)
return split_text_on_tokens(text=text, tokenizer=tokenizer)
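Example (requires `pip install tiktoken`; sizes are illustrative):
from langchain.text_splitter import TokenTextSplitter

splitter = TokenTextSplitter(encoding_name="gpt2", chunk_size=10, chunk_overlap=2)
chunks = splitter.split_text("All text is encoded to token ids, windowed, and decoded back.")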
[docs]class SentenceTransformersTokenTextSplitter(TextSplitter):
"""Implementation of splitting text that looks at tokens."""
def __init__(
self,
chunk_overlap: int = 50,
model_name: str = "sentence-transformers/all-mpnet-base-v2",
tokens_per_chunk: Optional[int] = None,
**kwargs: Any,
) -> None:
"""Create a new TextSplitter."""
super().__init__(**kwargs, chunk_overlap=chunk_overlap)
try:
from sentence_transformers import SentenceTransformer
except ImportError:
raise ImportError(
"Could not import sentence_transformer python package. "
"This is needed in order to for SentenceTransformersTokenTextSplitter. "
"Please install it with `pip install sentence-transformers`."
)
"Please install it with `pip install sentence-transformers`."
)
self.model_name = model_name
self._model = SentenceTransformer(self.model_name)
self.tokenizer = self._model.tokenizer
self._initialize_chunk_configuration(tokens_per_chunk=tokens_per_chunk)
def _initialize_chunk_configuration(
self, *, tokens_per_chunk: Optional[int]
) -> None:
self.maximum_tokens_per_chunk = cast(int, self._model.max_seq_length)
if tokens_per_chunk is None:
self.tokens_per_chunk = self.maximum_tokens_per_chunk
else:
self.tokens_per_chunk = tokens_per_chunk
if self.tokens_per_chunk > self.maximum_tokens_per_chunk:
raise ValueError(
f"The token limit of the models '{self.model_name}'"
f" is: {self.maximum_tokens_per_chunk}."
f" Argument tokens_per_chunk={self.tokens_per_chunk}"
f" > maximum token limit."
)
[docs] def split_text(self, text: str) -> List[str]:
def encode_strip_start_and_stop_token_ids(text: str) -> List[int]:
return self._encode(text)[1:-1]
tokenizer = Tokenizer(
chunk_overlap=self._chunk_overlap,
tokens_per_chunk=self.tokens_per_chunk,
decode=self.tokenizer.decode,
encode=encode_strip_start_and_stop_token_ids,
)
return split_text_on_tokens(text=text, tokenizer=tokenizer)
[docs] def count_tokens(self, *, text: str) -> int:
return len(self._encode(text))
_max_length_equal_32_bit_integer = 2**32
def _encode(self, text: str) -> List[int]:
token_ids_with_start_and_end_token_ids = self.tokenizer.encode(
text,
max_length=self._max_length_equal_32_bit_integer,
truncation="do_not_truncate",
)
return token_ids_with_start_and_end_token_ids
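Example (requires `pip install sentence-transformers`; the default model is downloaded on first use):
from langchain.text_splitter import SentenceTransformersTokenTextSplitter

splitter = SentenceTransformersTokenTextSplitter(chunk_overlap=10)
splitter.count_tokens(text="A short sentence.")  # token count under the model's tokenizer
chunks = splitter.split_text("A short sentence. " * 200)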
[docs]class Language(str, Enum):
"""Enum of the programming languages."""
CPP = "cpp"
GO = "go"
JAVA = "java"
JS = "js"
PHP = "php"
PROTO = "proto"
PYTHON = "python"
RST = "rst"
RUBY = "ruby"
RUST = "rust"
SCALA = "scala"
SWIFT = "swift"
MARKDOWN = "markdown"
LATEX = "latex"
HTML = "html"
SOL = "sol"
[docs]class RecursiveCharacterTextSplitter(TextSplitter):
"""Implementation of splitting text that looks at characters.
Recursively tries to split by different characters to find one
that works.
"""
def __init__(
self,
separators: Optional[List[str]] = None,
keep_separator: bool = True,
**kwargs: Any,
) -> None:
"""Create a new TextSplitter."""
super().__init__(keep_separator=keep_separator, **kwargs)
self._separators = separators or ["\n\n", "\n", " ", ""]
def _split_text(self, text: str, separators: List[str]) -> List[str]:
"""Split incoming text and return chunks."""
final_chunks = []
# Get appropriate separator to use
separator = separators[-1]
new_separators = []
for i, _s in enumerate(separators):
if _s == "":
separator = _s
break
if re.search(_s, text):
separator = _s
new_separators = separators[i + 1 :]
break
splits = _split_text_with_regex(text, separator, self._keep_separator)
# Now go merging things, recursively splitting longer texts.
_good_splits = []
_separator = "" if self._keep_separator else separator
for s in splits:
if self._length_function(s) < self._chunk_size:
_good_splits.append(s)
else:
if _good_splits:
merged_text = self._merge_splits(_good_splits, _separator)
final_chunks.extend(merged_text)
_good_splits = []
if not new_separators:
final_chunks.append(s)
else:
other_info = self._split_text(s, new_separators)
final_chunks.extend(other_info)
if _good_splits:
merged_text = self._merge_splits(_good_splits, _separator)
final_chunks.extend(merged_text)
return final_chunks
[docs] def split_text(self, text: str) -> List[str]:
return self._split_text(text, self._separators)
[docs] @classmethod
def from_language(
cls, language: Language, **kwargs: Any
) -> RecursiveCharacterTextSplitter:
separators = cls.get_separators_for_language(language)
return cls(separators=separators, **kwargs)
[docs] @staticmethod
def get_separators_for_language(language: Language) -> List[str]:
if language == Language.CPP:
return [
# Split along class definitions
"\nclass ",
# Split along function definitions
"\nvoid ",
"\nint ",
"\nfloat ",
"\ndouble ",
# Split along control flow statements
"\nif ",
"\nfor ",
"\nwhile ",
"\nswitch ",
"\ncase ",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.GO:
return [
# Split along function definitions
"\nfunc ",
"\nvar ",
"\nconst ",
"\ntype ",
# Split along control flow statements
"\nif ",
"\nfor ",
"\nswitch ",
"\ncase ",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.JAVA:
return [
# Split along class definitions
"\nclass ",
# Split along method definitions
"\npublic ",
"\nprotected ",
"\nprivate ",
"\nstatic ",
# Split along control flow statements
"\nif ",
"\nfor ",
"\nwhile ",
"\nswitch ",
"\ncase ",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.JS:
return [
# Split along function definitions
"\nfunction ",
"\nconst ",
"\nlet ",
"\nvar ",
"\nclass ",
# Split along control flow statements
"\nif ",
"\nfor ",
"\nwhile ",
"\nswitch ",
"\ncase ",
"\ndefault ",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.PHP:
return [
# Split along function definitions
"\nfunction ",
# Split along class definitions
"\nclass ",
# Split along control flow statements
"\nif ",
"\nforeach ",
"\nwhile ",
"\ndo ",
"\nswitch ",
"\ncase ",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.PROTO:
return [
# Split along message definitions
"\nmessage ",
# Split along service definitions
"\nservice ",
# Split along enum definitions
"\nenum ",
# Split along option definitions
"\noption ",
# Split along import statements
"\nimport ",
# Split along syntax declarations
"\nsyntax ",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.PYTHON:
return [
# First, try to split along class definitions
"\nclass ",
https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html
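The per-language separator listing above is truncated at the page boundary. For completeness, a usage sketch of the language-aware entry point shown earlier (sizes are illustrative):
from langchain.text_splitter import Language, RecursiveCharacterTextSplitter

code = "def hello():\n    print('hello')\n\ndef world():\n    print('world')\n"
splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.PYTHON, chunk_size=60, chunk_overlap=0
)
docs = splitter.create_documents([code])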