gradio.Client(src, ···)
The main Client class for the Python client. This class is used to connect to a remote Gradio app and call its API endpoints.
from gradio_client import Client
client = Client("abidlabs/whisper-large-v2") # connecting to a Hugging Face Space
client.predict("test.mp4", api_name="/predict")
>> What a nice recording! # returns the result of the remote API call
client = Client("https://bec81a83-5b5c-471e.gradio.live") # connecting to a temporary Gradio share URL
job = client.submit("hello", api_name="/predict") # runs the prediction in a background thread
job.result()
>> 49 # returns the result of the remote API call (blocking call)
Parameter | Description |
---|---|
src str required | Either the name of the Hugging Face Space to load (e.g. "abidlabs/whisper-large-v2") or the full URL (including "http" or "https") of the hosted Gradio app to load (e.g. "http://mydomain.com/app" or "https://bec81a83-5b5c-471e.gradio.live/"). |
hf_token str | None default: None | The Hugging Face token to use to access private Spaces. Automatically fetched if you are logged in via the Hugging Face Hub CLI. Obtain from: https://huggingface.co/settings/token |
max_workers int default: 40 | The maximum number of thread workers that can be used to make requests to the remote Gradio app simultaneously. |
serialize bool | None default: None | Deprecated. Please use the equivalent upload_files parameter instead. |
output_dir str | Path default: "/tmp/gradio" | The directory to save files that are downloaded from the remote API. If None, reads from the GRADIO_TEMP_DIR environment variable. Defaults to a temporary directory on your machine. |
verbose bool default: True | Whether the client should print statements to the console. |
auth tuple[str, str] | None default: None | Credentials to use if the remote Gradio app requires authentication, provided as a tuple of (username, password). |
headers dict[str, str] | None default: None | Additional headers to send to the remote Gradio app on every request. By default only the HF authorization and user-agent headers are sent. These headers will override the default headers if they have the same keys. |
upload_files bool default: True | Whether the client should treat input string filepaths as files and upload them to the remote server. If False, the client will always treat input string filepaths as strings and not modify them, and files should be passed in explicitly (e.g. using gradio_client.file()). |
download_files bool default: True | Whether the client should download output files from the remote API and return them as string filepaths on the local machine. If False, the client will return a FileData dataclass object with the filepath on the remote machine instead. |
ssl_verify bool default: True | If False, skips certificate validation which allows the client to connect to Gradio apps that are using self-signed certificates. |
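For illustration, here is a minimal sketch combining several of these constructor options. It reuses the example Space from above; the output directory and verbosity values are placeholders, not recommendations.
import os
from gradio_client import Client

client = Client(
    "abidlabs/whisper-large-v2",    # Hugging Face Space to connect to
    output_dir="gradio_downloads",  # placeholder directory for downloaded output files
    download_files=True,            # return outputs as local filepaths
    verbose=False,                  # suppress console messages
)
client.predict("test.mp4", api_name="/predict")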
gradio.Client.predict(args, ···)
Calls the Gradio API and returns the result (this is a blocking call).
from gradio_client import Client
client = Client(src="gradio/calculator")
client.predict(5, "add", 4, api_name="/predict")
>> 9.0
Parameter | Description |
---|---|
args required | The arguments to pass to the remote API. The order of the arguments must match the order of the inputs in the Gradio app. |
api_name str | None default: None | The name of the API endpoint to call starting with a leading slash, e.g. "/predict". Does not need to be provided if the Gradio app has only one named API endpoint. |
fn_index int | None default: None | As an alternative to api_name, this parameter takes the index of the API endpoint to call, e.g. 0. Both api_name and fn_index can be provided, but if they conflict, api_name will take precedence. |
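As a sketch of the fn_index alternative, the call below addresses the endpoint by index instead of by name. It assumes that index 0 corresponds to the "/predict" endpoint of this app, which is not guaranteed in general; use view_api to confirm the mapping.
from gradio_client import Client

client = Client(src="gradio/calculator")
# Address the endpoint by index; the index-to-endpoint mapping depends on the app.
client.predict(5, "add", 4, fn_index=0)
>> 9.0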
gradio.Client.submit(args, ···)
Creates and returns a Job object which calls the Gradio API in a background thread. The job can be used to retrieve the status and result of the remote API call.
from gradio_client import Client
client = Client(src="gradio/calculator")
job = client.submit(5, "add", 4, api_name="/predict")
job.status()
>> <Status.STARTING: 'STARTING'>
job.result() # blocking call
>> 9.0
Parameter | Description |
---|---|
args required | The arguments to pass to the remote API. The order of the arguments must match the order of the inputs in the Gradio app. |
api_name str | None default: None | The name of the API endpoint to call starting with a leading slash, e.g. "/predict". Does not need to be provided if the Gradio app has only one named API endpoint. |
fn_index int | None default: None | As an alternative to api_name, this parameter takes the index of the API endpoint to call, e.g. 0. Both api_name and fn_index can be provided, but if they conflict, api_name will take precedence. |
result_callbacks Callable | list[Callable] | None default: None | A callback function, or list of callback functions, to be called when the result is ready. If a list of functions is provided, they will be called in order. The return values from the remote API are provided as separate parameters into the callback. If None, no callback will be called. |
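A sketch of the result_callbacks parameter, using a hypothetical log_result function. The callback receives the endpoint's return value(s) as separate positional arguments once the job completes.
from gradio_client import Client

client = Client(src="gradio/calculator")

def log_result(result):
    # Hypothetical callback: called with the endpoint's return value when ready
    print(f"Calculator returned: {result}")

job = client.submit(5, "add", 4, api_name="/predict", result_callbacks=[log_result])
job.result()  # blocks until the job finishes; log_result fires when the result is ready
>> 9.0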
gradio.Client.view_api(···)
Prints the usage info for the API. If the Gradio app has multiple API endpoints, the usage info for each endpoint is printed separately. If return_format="dict", the info is returned in dictionary format, as shown in the example below.
from gradio_client import Client
client = Client(src="gradio/calculator")
client.view_api(return_format="dict")
>> {
    'named_endpoints': {
        '/predict': {
            'parameters': [
                {
                    'label': 'num1',
                    'python_type': 'int | float',
                    'type_description': 'numeric value',
                    'component': 'Number',
                    'example_input': '5'
                },
                {
                    'label': 'operation',
                    'python_type': 'str',
                    'type_description': 'string value',
                    'component': 'Radio',
                    'example_input': 'add'
                },
                {
                    'label': 'num2',
                    'python_type': 'int | float',
                    'type_description': 'numeric value',
                    'component': 'Number',
                    'example_input': '5'
                },
            ],
            'returns': [
                {
                    'label': 'output',
                    'python_type': 'int | float',
                    'type_description': 'numeric value',
                    'component': 'Number',
                },
            ]
        },
        '/flag': {
            'parameters': [
                ...
            ],
            'returns': [
                ...
            ]
        }
    },
    'unnamed_endpoints': {
        2: {
            'parameters': [
                ...
            ],
            'returns': [
                ...
            ]
        }
    }
}
Parameter | Description |
---|---|
all_endpoints bool | None default: None | If True, prints information for both named and unnamed endpoints in the Gradio app. If False, will only print info about named endpoints. If None (default), will print info about named endpoints, unless there aren't any -- in which case it will print info about unnamed endpoints. |
print_info bool default: True | If True, prints the usage info to the console. If False, does not print the usage info. |
return_format Literal['dict', 'str'] | None default: None | If None, nothing is returned. If "str", returns the same string that would be printed to the console. If "dict", returns the usage info as a dictionary that can be programmatically parsed, and all endpoints are returned in the dictionary regardless of the value of all_endpoints. |
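A sketch of using the dictionary form programmatically; the keys follow the structure shown in the example above, and print_info=False simply suppresses the console output.
from gradio_client import Client

client = Client(src="gradio/calculator")
api_info = client.view_api(return_format="dict", print_info=False)

# Inspect the parameter labels of the "/predict" endpoint.
predict_info = api_info["named_endpoints"]["/predict"]
print([p["label"] for p in predict_info["parameters"]])
>> ['num1', 'operation', 'num2']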
gradio.Client.duplicate(from_id, ···)
Duplicates a Hugging Face Space under your account and returns a Client object for the new Space. No duplication is created if the Space already exists in your account (to override this, provide a new name for the new Space using the to_id parameter). To use this method, you must provide an hf_token or be logged in via the Hugging Face Hub CLI. The new Space will be private by default and use the same hardware as the original Space; this can be changed with the private and hardware parameters. For hardware upgrades (beyond the basic CPU tier), you may be required to provide billing information on Hugging Face: https://huggingface.co/settings/billing
import os
from gradio_client import Client
HF_TOKEN = os.environ.get("HF_TOKEN")
client = Client.duplicate("abidlabs/whisper", hf_token=HF_TOKEN)
client.predict("audio_sample.wav")
>> "This is a test of the whisper speech recognition model."
Parameter | Description |
---|---|
from_id str required | The name of the Hugging Face Space to duplicate, in the "owner/space-name" format, e.g. "abidlabs/whisper". |
to_id str | None default: None | The name of the new Hugging Face Space to create, e.g. "abidlabs/whisper-duplicate". If not provided, the new Space will be named "<your-username>/<original-space-name>". |
hf_token str | None default: None | The Hugging Face token to use to access private Spaces. Automatically fetched if you are logged in via the Hugging Face Hub CLI. Obtain from: https://huggingface.co/settings/token |
private bool default: True | Whether the new Space should be private (True) or public (False). Defaults to True. |
hardware Literal['cpu-basic', 'cpu-upgrade', 't4-small', 't4-medium', 'a10g-small', 'a10g-large', 'a100-large'] | SpaceHardware | None default: None | The hardware tier to use for the new Space. Defaults to the same hardware tier as the original Space. Options include "cpu-basic", "cpu-upgrade", "t4-small", "t4-medium", "a10g-small", "a10g-large", "a100-large", subject to availability. |
secrets dict[str, str] | None default: None | A dictionary of (secret key, secret value) to pass to the new Space. Defaults to None. Secrets are only used when the Space is duplicated for the first time, and are not updated if the duplicated Space already exists. |
sleep_timeout int default: 5 | The number of minutes after which the duplicated Space will be paused if no requests are made to it (to minimize billing charges). Defaults to 5 minutes. |
max_workers int default: 40 | The maximum number of thread workers that can be used to make requests to the remote Gradio app simultaneously. |
verbose bool default: True | Whether the client should print statements to the console. |
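A sketch combining several of these options. The target Space name, hardware tier, and secret values below are placeholders, and the hardware upgrade may require billing information as noted above.
import os
from gradio_client import Client

HF_TOKEN = os.environ.get("HF_TOKEN")

client = Client.duplicate(
    "abidlabs/whisper",
    to_id="my-whisper-copy",         # placeholder name for the new Space
    hf_token=HF_TOKEN,
    private=True,
    hardware="t4-small",             # upgraded hardware tier (may incur charges)
    secrets={"MY_SECRET": "value"},  # placeholder secret passed to the new Space
    sleep_timeout=10,                # pause the Space after 10 idle minutes
)
client.predict("audio_sample.wav")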
gradio.Client.deploy_discord(···)
Deploys the upstream app as a Discord bot. Currently only supports gr.ChatInterface.
Parameter | Description |
---|---|
discord_bot_token str | None default: None | This is the "password" needed to launch the bot. Users can get a token by creating a bot app on the Discord website. If you run this method without specifying a token, the Space will explain how to get one. See here: https://huggingface.co/spaces/freddyaboulton/test-discord-bot-v1. |
api_names list[str | tuple[str, str]] | None default: None | The api_names of the app to turn into bot commands. This parameter currently has no effect as ChatInterface only has one api_name ('/chat'). |
to_id str | None default: None | The name of the Space hosting the Discord bot. If None, the name will begin with "gradio-discord-bot-". |
hf_token str | None default: None | HF API token with write privileges, needed to upload the files to the HF Space. Can be omitted if logged in via the Hugging Face CLI, unless the upstream Space is private. Obtain from: https://huggingface.co/settings/token |
private bool default: False | Whether the Space hosting the Discord bot is private. The visibility of the Discord bot itself is set via the Discord website. See https://huggingface.co/spaces/freddyaboulton/test-discord-bot-v1 |
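A sketch of a typical call. The Space name and token below are placeholders, and the upstream Space is assumed to wrap a gr.ChatInterface.
import os
from gradio_client import Client

# Placeholder Space and token: replace with your own chat Space and Discord bot token.
client = Client("my-username/my-chat-space")
client.deploy_discord(
    discord_bot_token=os.environ.get("DISCORD_BOT_TOKEN"),
    to_id="my-chat-discord-bot",  # placeholder name for the Space hosting the bot
    private=False,
)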