Learn when and how to use different W&B APIs in your machine learning workflows.
Query and analyze data logged to W&B.
Automate your W&B workflows.
Train and fine-tune models, manage models from experimentation to production.
Learn when and how to use different W&B APIs to track, share, and manage model artifacts in your machine learning workflows. This page covers logging experiments, generating reports, and accessing logged data using the appropriate W&B API for each task.
W&B offers the following APIs:
- W&B Python SDK (wandb.sdk): Log and monitor experiments during training.
- W&B Public API (wandb.apis.public): Query and analyze logged experiment data.
- W&B Report and Workspace API (wandb.wandb-workspaces): Create reports to summarize findings.

To authenticate your machine with W&B, you must first generate an API key at wandb.ai/authorize. Copy the API key and store it securely.
Install the W&B library and some other packages you will need for this walkthrough.
pip install wandb
Import W&B Python SDK:
import wandb
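Then log in with your API key. A minimal sketch: wandb.login() reads stored credentials or prompts for the key (you can also run wandb login from the command line).

# Authenticate with W&B; prompts for your API key if credentials are not already stored
wandb.login()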
Specify your team entity and a project name in the following code block:
TEAM_ENTITY = "<Team_Entity>" # Replace with your team entity
PROJECT = "my-awesome-project"
The following code simulates a basic machine learning workflow: training a model, logging metrics, and saving the model as an artifact.
Use the W&B Python SDK (wandb.sdk) to interact with W&B during training. Log the loss with wandb.Run.log(), then save the trained model as an artifact: create a wandb.Artifact object, add the model file with Artifact.add_file, and save it.
import random # For simulating data
def model(training_data: int) -> int:
    """Model simulation for demonstration purposes."""
    return training_data * 2 + random.randint(-1, 1)  
# Simulate weights and noise
weights = random.random() # Initialize random weights
noise = random.random() / 5  # Small random noise to simulate noise
# Hyperparameters and configuration
config = {
    "epochs": 10,  # Number of epochs to train
    "learning_rate": 0.01,  # Learning rate for the optimizer
}
# Use context manager to initialize and close W&B runs
with wandb.init(project=PROJECT, entity=TEAM_ENTITY, config=config) as run:    
    # Simulate training loop
    for epoch in range(config["epochs"]):
        xb = weights + noise  # Simulated input training data
        yb = weights + noise * 2  # Simulated target output (double the input noise)
        
        y_pred = model(xb)  # Model prediction
        loss = (yb - y_pred) ** 2  # Squared error loss
        print(f"epoch={epoch}, loss={loss}")
        # Log epoch and loss to W&B
        run.log({
            "epoch": epoch,
            "loss": loss,
        })
    # Unique name for the model artifact
    model_artifact_name = "model-demo"
    # Local path to save the simulated model file
    PATH = "model.txt" 
    # Save model locally
    with open(PATH, "w") as f:
        f.write(str(weights)) # Saving model weights to a file
    # Create an artifact object
    # Add locally saved model to artifact object
    artifact = wandb.Artifact(name=model_artifact_name, type="model", description="My trained model")
    artifact.add_file(local_path=PATH)
    artifact.save()
The key takeaways from the previous code block are:
- Use wandb.Run.log() to log metrics during training.
- Use wandb.Artifact to save models (datasets, and so forth) as artifacts to your W&B project.

Now that you have trained a model and saved it as an artifact, you can publish it to a registry in W&B. Use wandb.Run.use_artifact() to retrieve the artifact from your project and prepare it for publication in the Model registry. wandb.Run.use_artifact() serves two key purposes:

- It retrieves the artifact object from your project so you can link it to the registry.
- It marks the artifact as an input to the run, which records lineage between the run and the artifact.
To share the model with others in your organization, publish it to a collection using wandb.Run.link_artifact(). The following code links the artifact to the core Model registry, making it accessible to your team.
# Artifact name specifies the specific artifact version within our team's project
artifact_name = f'{TEAM_ENTITY}/{PROJECT}/{model_artifact_name}:v0'
print("Artifact name: ", artifact_name)
REGISTRY_NAME = "Model" # Name of the registry in W&B
COLLECTION_NAME = "DemoModels"  # Name of the collection in the registry
# Create a target path for our artifact in the registry
target_path = f"wandb-registry-{REGISTRY_NAME}/{COLLECTION_NAME}"
print("Target path: ", target_path)
run = wandb.init(entity=TEAM_ENTITY, project=PROJECT)
model_artifact = run.use_artifact(artifact_or_name=artifact_name, type="model")
run.link_artifact(artifact=model_artifact, target_path=target_path)
run.finish()
After running wandb.Run.link_artifact(), the model artifact will be in the DemoModels collection in your registry. From there, you can view details such as the version history, lineage map, and other metadata.
For additional information on how to link artifacts to a registry, see Link artifacts to a registry.
To use a model for inference, use wandb.Run.use_artifact() to retrieve the published artifact from the registry. This returns an artifact object; call wandb.Artifact.download() on it to download the artifact to a local directory.
REGISTRY_NAME = "Model"  # Name of the registry in W&B
COLLECTION_NAME = "DemoModels"  # Name of the collection in the registry
VERSION = 0 # Version of the artifact to retrieve
model_artifact_name = f"wandb-registry-{REGISTRY_NAME}/{COLLECTION_NAME}:v{VERSION}"
print(f"Model artifact name: {model_artifact_name}")
run = wandb.init(entity=TEAM_ENTITY, project=PROJECT)
registry_model = run.use_artifact(artifact_or_name=model_artifact_name)
local_model_path = registry_model.download()
For more information on how to retrieve artifacts from a registry, see Download an artifact from a registry.
Depending on your machine learning framework, you may need to recreate the model architecture before loading the weights. This is left as an exercise for the reader, as it depends on the specific framework and model you are using.
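For the simulated model in this walkthrough, a minimal sketch of loading the downloaded weights might look like the following (it assumes the model.txt file written earlier and the local_model_path directory returned by download()):

import os

# local_model_path is the directory returned by registry_model.download()
weights_path = os.path.join(local_model_path, "model.txt")
with open(weights_path) as f:
    loaded_weights = float(f.read())
print(f"Loaded simulated model weights: {loaded_weights}")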
Create and share a report to summarize your work. To create a report programmatically, use the W&B Report and Workspace API.
First, install the W&B Reports API:
pip install wandb wandb-workspaces -qqq
The following code block creates a report with multiple blocks, including markdown, panel grids, and more. You can customize the report by adding more blocks or changing the content of existing blocks.
The code block prints the URL of the created report. Open this link in your browser to view the report.
import wandb_workspaces.reports.v2 as wr
experiment_summary = """This is a summary of the experiment conducted to train a simple model using W&B."""
dataset_info = """The dataset used for training consists of synthetic data generated by a simple model."""
model_info = """The model is a simple linear regression model that predicts output based on input data with some noise."""
report = wr.Report(
    project=PROJECT,
    entity=TEAM_ENTITY,
    title="My Awesome Model Training Report",
    description=experiment_summary,
    blocks= [
        wr.TableOfContents(),
        wr.H2("Experiment Summary"),
        wr.MarkdownBlock(text=experiment_summary),
        wr.H2("Dataset Information"),
        wr.MarkdownBlock(text=dataset_info),
        wr.H2("Model Information"),
        wr.MarkdownBlock(text = model_info),
        wr.PanelGrid(
            panels=[
                wr.LinePlot(title="Train Loss", x="Step", y=["loss"], title_x="Step", title_y="Loss")
                ],
            ),  
    ]
)
# Save the report to W&B
report.save()
For more information on how to create a report programmatically or how to create a report interactively with the W&B App, see Create a report in the W&B Docs Developer guide.
Use the W&B Public APIs to query, analyze, and manage historical data from W&B. This can be useful for tracking the lineage of artifacts, comparing different versions, and analyzing the performance of models over time.
The following code block demonstrates how to query the Model registry for all artifacts in a specific collection. It retrieves the collection and iterates through its versions, printing out the name and version of each artifact.
import wandb
# Initialize wandb API
api = wandb.Api()
# Find all artifact versions that contain the string `model` and
# have either the tag `text-classification` or the alias `latest`
registry_filters = {
    "name": {"$regex": "model"}
}
# Use logical $or operator to filter artifact versions
version_filters = {
    "$or": [
        {"tag": "text-classification"},
        {"alias": "latest"}
    ]
}
# Returns an iterable of all artifact versions that match the filters
artifacts = api.registries(filter=registry_filters).collections().versions(filter=version_filters)
# Print out the name, collection, aliases, tags, and created_at date of each artifact found
for art in artifacts:
    print(f"artifact name: {art.name}")
    print(f"collection artifact belongs to: { art.collection.name}")
    print(f"artifact aliases: {art.aliases}")
    print(f"tags attached to artifact: {art.tags}")
    print(f"artifact created at: {art.created_at}\n")
For more information on querying the registry, see Query registry items with MongoDB-style queries.
Train and fine-tune models, and manage them from experimentation to production. For guides and examples, see https://docs.wandb.ai.
agent

agent(
    sweep_id: str,
    function: Optional[Callable] = None,
    entity: Optional[str] = None,
    project: Optional[str] = None,
    count: Optional[int] = None
) → None
Start one or more sweep agents.
The sweep agent uses the sweep_id to know which sweep it is a part of, what function to execute, and (optionally) how many agents to run.
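For example, a minimal sketch that defines a sweep and starts an agent (the train function, metric names, and project name below are illustrative):

import wandb

def train():
    with wandb.init() as run:
        lr = run.config.learning_rate
        run.log({"loss": 1.0 / lr})  # toy objective for illustration

sweep_config = {
    "method": "random",
    "metric": {"name": "loss", "goal": "minimize"},
    "parameters": {"learning_rate": {"values": [0.001, 0.01, 0.1]}},
}

sweep_id = wandb.sweep(sweep=sweep_config, project="my-sweep-project")
wandb.agent(sweep_id, function=train, count=3)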
Args:
sweep_id: The unique identifier for a sweep. A sweep ID is generated by W&B CLI or Python SDK.
function: A function to call instead of the "program" specified in the sweep config.
entity: The username or team name where you want to send W&B runs created by the sweep to. Ensure that the entity you specify already exists. If you don't specify an entity, the run will be sent to your default entity, which is usually your username.
project: The name of the project where W&B runs created from the sweep are sent to. If the project is not specified, the run is sent to a project labeled "Uncategorized".
count: The number of sweep config trials to try.

controller

controller(
    sweep_id_or_config: Optional[str, Dict] = None,
    entity: Optional[str] = None,
    project: Optional[str] = None
) → _WandbController
Public sweep controller constructor.
Examples:
import wandb
tuner = wandb.controller(...)
print(tuner.sweep_config)
print(tuner.sweep_id)
tuner.configure_search(...)
tuner.configure_stopping(...)
finish

finish(exit_code: 'int | None' = None, quiet: 'bool | None' = None) → None
Finish a run and upload any remaining data.
Marks the completion of a W&B run and ensures all data is synced to the server. The run’s final state is determined by its exit conditions and sync status.
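A short sketch of finishing a run explicitly rather than relying on a context manager (the project name is illustrative):

import wandb

run = wandb.init(project="my-awesome-project")
run.log({"accuracy": 0.9})

# Flush any remaining data and mark the run as finished
wandb.finish()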
Run States:
- Finished: The run completed successfully (exit_code=0) with all data synced.
- Failed: The run completed with errors (exit_code!=0).

Args:
exit_code: Integer indicating the run's exit status. Use 0 for success; any other value marks the run as failed.
quiet: Deprecated. Configure logging verbosity using wandb.Settings(quiet=...).

init

init(
    entity: 'str | None' = None,
    project: 'str | None' = None,
    dir: 'StrPath | None' = None,
    id: 'str | None' = None,
    name: 'str | None' = None,
    notes: 'str | None' = None,
    tags: 'Sequence[str] | None' = None,
    config: 'dict[str, Any] | str | None' = None,
    config_exclude_keys: 'list[str] | None' = None,
    config_include_keys: 'list[str] | None' = None,
    allow_val_change: 'bool | None' = None,
    group: 'str | None' = None,
    job_type: 'str | None' = None,
    mode: "Literal['online', 'offline', 'disabled', 'shared'] | None" = None,
    force: 'bool | None' = None,
    anonymous: "Literal['never', 'allow', 'must'] | None" = None,
    reinit: "bool | Literal[None, 'default', 'return_previous', 'finish_previous', 'create_new']" = None,
    resume: "bool | Literal['allow', 'never', 'must', 'auto'] | None" = None,
    resume_from: 'str | None' = None,
    fork_from: 'str | None' = None,
    save_code: 'bool | None' = None,
    tensorboard: 'bool | None' = None,
    sync_tensorboard: 'bool | None' = None,
    monitor_gym: 'bool | None' = None,
    settings: 'Settings | dict[str, Any] | None' = None
) → Run
Start a new run to track and log to W&B.
In an ML training pipeline, you could add wandb.init() to the beginning of your training script as well as your evaluation script, and each piece would be tracked as a run in W&B.
wandb.init() spawns a new background process to log data to a run, and it also syncs data to https://wandb.ai by default, so you can see your results in real-time. When you’re done logging data, call wandb.Run.finish() to end the run. If you don’t call run.finish(), the run will end when your script exits.
Run IDs must not contain any of the following special characters / \ # ? % :
Args:
entity: The username or team name the runs are logged to. The entity must already exist, so ensure you create your account or team in the UI before starting to log runs. If not specified, the run will default to your default entity. To change the default entity, go to your settings and update the "Default location to create new projects" under "Default team".
project: The name of the project under which this run will be logged. If not specified, we use a heuristic to infer the project name based on the system, such as checking the git root or the current program file. If we can't infer the project name, the project will default to "uncategorized".
dir: The absolute path to the directory where experiment logs and metadata files are stored. If not specified, this defaults to the ./wandb directory. Note that this does not affect the location where artifacts are stored when calling download().
id: A unique identifier for this run, used for resuming. It must be unique within the project and cannot be reused once a run is deleted. For a short descriptive name, use the name field, or for saving hyperparameters to compare across runs, use config.
name: A short display name for this run, which appears in the UI to help you identify it. By default, we generate a random two-word name, allowing easy cross-referencing of runs from tables to charts. Keeping these run names brief enhances readability in chart legends and tables. For saving hyperparameters, we recommend using the config field.
notes: A detailed description of the run, similar to a commit message in Git. Use this argument to capture any context or details that may help you recall the purpose or setup of this run in the future.
tags: A list of tags to label this run in the UI. Tags are helpful for organizing runs or adding temporary identifiers like "baseline" or "production". You can easily add, remove tags, or filter by tags in the UI. If resuming a run, the tags provided here will replace any existing tags. To add tags to a resumed run without overwriting the current tags, use run.tags += ("new_tag",) after calling run = wandb.init().
config: Sets wandb.config, a dictionary-like object for storing input parameters to your run, such as model hyperparameters or data preprocessing settings. The config appears in the UI in an overview page, allowing you to group, filter, and sort runs based on these parameters. Keys should not contain periods (.), and values should be smaller than 10 MB. If a dictionary, argparse.Namespace, or absl.flags.FLAGS is provided, the key-value pairs will be loaded directly into wandb.config. If a string is provided, it is interpreted as a path to a YAML file, from which configuration values will be loaded into wandb.config.
config_exclude_keys: A list of specific keys to exclude from wandb.config.
config_include_keys: A list of specific keys to include in wandb.config.
allow_val_change: Controls whether config values can be modified after their initial set. By default, an exception is raised if a config value is overwritten. For tracking variables that change during training, such as a learning rate, consider using wandb.log() instead. By default, this is False in scripts and True in Notebook environments.
group: Specify a group name to organize individual runs as part of a larger experiment. This is useful for cases like cross-validation or running multiple jobs that train and evaluate a model on different test sets. Grouping allows you to manage related runs collectively in the UI, making it easy to toggle and review results as a unified experiment.
job_type: Specify the type of run, especially helpful when organizing runs within a group as part of a larger experiment. For example, in a group, you might label runs with job types such as "train" and "eval". Defining job types enables you to easily filter and group similar runs in the UI, facilitating direct comparisons.
mode: Specifies how run data is managed, with the following options:
- "online" (default): Enables live syncing with W&B when a network connection is available, with real-time updates to visualizations.
- "offline": Suitable for air-gapped or offline environments; data is saved locally and can be synced later. Ensure the run folder is preserved to enable future syncing.
- "disabled": Disables all W&B functionality, making the run's methods no-ops. Typically used in testing to bypass W&B operations.
- "shared": (This is an experimental feature.) Allows multiple processes, possibly on different machines, to simultaneously log to the same run. In this approach you use a primary node and one or more worker nodes to log data to the same run. Within the primary node you initialize a run. For each worker node, initialize a run using the run ID used by the primary node.
force: Determines if a W&B login is required to run the script. If True, the user must be logged in to W&B; otherwise, the script will not proceed. If False (default), the script can proceed without a login, switching to offline mode if the user is not logged in.
anonymous: Specifies the level of control over anonymous data logging. Available options are:
- "never" (default): Requires you to link your W&B account before tracking the run. This prevents unintentional creation of anonymous runs by ensuring each run is associated with an account.
- "allow": Enables a logged-in user to track runs with their account, but also allows someone running the script without a W&B account to view the charts and data in the UI.
- "must": Forces the run to be logged to an anonymous account, even if the user is logged in.
reinit: Shorthand for the "reinit" setting. Determines the behavior of wandb.init() when a run is active.
resume: Controls the behavior when resuming a run with the specified id. Available options are:
- "allow": If a run with the specified id exists, it will resume from the last step; otherwise, a new run will be created.
- "never": If a run with the specified id exists, an error will be raised. If no such run is found, a new run will be created.
- "must": If a run with the specified id exists, it will resume from the last step. If no run is found, an error will be raised.
- "auto": Automatically resumes the previous run if it crashed on this machine; otherwise, starts a new run.
- True: Deprecated. Use "auto" instead.
- False: Deprecated. Use the default behavior (leaving resume unset) to always start a new run.
If resume is set, fork_from and resume_from cannot be used. When resume is unset, the system will always start a new run.
resume_from: Specifies a moment in a previous run to resume a run from, using the format {run_id}?_step={step}. This allows users to truncate the history logged to a run at an intermediate step and resume logging from that step. The target run must be in the same project. If an id argument is also provided, the resume_from argument will take precedence. resume, resume_from and fork_from cannot be used together; only one of them can be used at a time. Note that this feature is in beta and may change in the future.
fork_from: Specifies a point in a previous run from which to fork a new run, using the format {id}?_step={step}. This creates a new run that resumes logging from the specified step in the target run's history. The target run must be part of the current project. If an id argument is also provided, it must be different from the fork_from argument; an error will be raised if they are the same. resume, resume_from and fork_from cannot be used together; only one of them can be used at a time. Note that this feature is in beta and may change in the future.
save_code: Enables saving the main script or notebook to W&B, aiding in experiment reproducibility and allowing code comparisons across runs in the UI. By default, this is disabled, but you can change the default to enable on your settings page.
tensorboard: Deprecated. Use sync_tensorboard instead.
sync_tensorboard: Enables automatic syncing of W&B logs from TensorBoard or TensorBoardX, saving relevant event files for viewing in the W&B UI. (Default: False)
monitor_gym: Enables automatic logging of videos of the environment when using OpenAI Gym.
settings: Specifies a dictionary or wandb.Settings object with advanced settings for the run.

Returns:
A Run object.
Raises:
Error: If some unknown or internal error happened during the run initialization.
AuthenticationError: If the user failed to provide valid credentials.
CommError: If there was a problem communicating with the WandB server.
UsageError: If the user provided invalid arguments.
KeyboardInterrupt: If the user interrupts the run.

Examples:
wandb.init() returns a Run object. Use the run object to log data, save artifacts, and manage the run lifecycle.
import wandb
config = {"lr": 0.01, "batch_size": 32}
with wandb.init(config=config) as run:
    # Log accuracy and loss to the run
    acc = 0.95  # Example accuracy
    loss = 0.05  # Example loss
    run.log({"accuracy": acc, "loss": loss})
login

login(
    anonymous: Optional[Literal['must', 'allow', 'never']] = None,
    key: Optional[str] = None,
    relogin: Optional[bool] = None,
    host: Optional[str] = None,
    force: Optional[bool] = None,
    timeout: Optional[int] = None,
    verify: bool = False,
    referrer: Optional[str] = None
) → bool
Set up W&B login credentials.
By default, this will only store credentials locally without verifying them with the W&B server. To verify credentials, pass verify=True.
Args:
anonymous: Set to "must", "allow", or "never". If set to "must", always log a user in anonymously. If set to "allow", only create an anonymous user if the user isn't already logged in. If set to "never", never log a user anonymously. Default set to "never". Defaults to None.
key: The API key to use.
relogin: If true, will re-prompt for API key.
host: The host to connect to.
force: If true, will force a relogin.
timeout: Number of seconds to wait for user input.
verify: Verify the credentials with the W&B server.
referrer: The referrer to use in the URL login request.

Returns:
bool: If key is configured.

Raises:
AuthenticationError: If api_key fails verification with the server.
UsageError: If api_key cannot be configured and no tty.

restore

restore(
    name: 'str',
    run_path: 'str | None' = None,
    replace: 'bool' = False,
    root: 'str | None' = None
) → None | TextIO
Download the specified file from cloud storage.
File is placed into the current directory or run directory. By default, will only download the file if it doesn’t already exist.
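A minimal usage sketch (the file name and run path below are hypothetical):

import wandb

# Download "model.h5" logged by the run at my-entity/my-project/abc123
best_model_file = wandb.restore("model.h5", run_path="my-entity/my-project/abc123")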
Args:
name: The name of the file.
run_path: Optional path to a run to pull files from, i.e. username/project_name/run_id. If wandb.init has not been called, this is required.
replace: Whether to download the file even if it already exists locally.
root: The directory to download the file to. Defaults to the current directory or the run directory if wandb.init was called.

Returns: None if it can't find the file, otherwise a file object open for reading.
Raises:
CommError: If W&B can't connect to the W&B backend.
ValueError: If the file is not found or can't find run_path.

setup

setup(settings: 'Settings | None' = None) → _WandbSetup
Prepares W&B for use in the current process and its children.
You can usually ignore this as it is implicitly called by wandb.init().
When using wandb in multiple processes, calling wandb.setup() in the parent process before starting child processes may improve performance and resource utilization.
Note that wandb.setup() modifies os.environ, and it is important that child processes inherit the modified environment variables.
See also wandb.teardown().
Args:
settings: Configuration settings to apply globally. These can be overridden by subsequent wandb.init() calls.

Example:
import multiprocessing
import wandb
def run_experiment(params):
   with wandb.init(config=params):
        # Run experiment
        pass
if __name__ == "__main__":
   # Start backend and set global config
   wandb.setup(settings={"project": "my_project"})
   # Define experiment parameters
   experiment_params = [
        {"learning_rate": 0.01, "epochs": 10},
        {"learning_rate": 0.001, "epochs": 20},
   ]
   # Start multiple processes, each running a separate experiment
   processes = []
   for params in experiment_params:
        p = multiprocessing.Process(target=run_experiment, args=(params,))
        p.start()
        processes.append(p)
   # Wait for all processes to complete
   for p in processes:
        p.join()
   # Optional: Explicitly shut down the backend
   wandb.teardown()
sweep

sweep(
    sweep: Union[dict, Callable],
    entity: Optional[str] = None,
    project: Optional[str] = None,
    prior_runs: Optional[List[str]] = None
) → str
Initialize a hyperparameter sweep.
Search for hyperparameters that optimize a cost function of a machine learning model by testing various combinations.
Make note of the unique identifier, sweep_id, that is returned. At a later step, provide the sweep_id to a sweep agent.
See Sweep configuration structure for information on how to define your sweep.
Args:
sweep: The configuration of a hyperparameter search (or configuration generator). If you provide a callable, ensure that the callable does not take arguments and that it returns a dictionary that conforms to the W&B sweep config spec.
entity: The username or team name where you want to send W&B runs created by the sweep to. Ensure that the entity you specify already exists. If you don't specify an entity, the run will be sent to your default entity, which is usually your username.
project: The name of the project where W&B runs created from the sweep are sent to. If the project is not specified, the run is sent to a project labeled "Uncategorized".
prior_runs: The run IDs of existing runs to add to this sweep.

Returns:
str: A unique identifier for the sweep.

teardown

teardown(exit_code: 'int | None' = None) → None
Waits for W&B to finish and frees resources.
Completes any runs that were not explicitly finished using run.finish() and waits for all data to be uploaded.
It is recommended to call this at the end of a session that used wandb.setup(). It is invoked automatically in an atexit hook, but this is not reliable in certain setups such as when using Python’s multiprocessing module.
Artifact

Flexible and lightweight building block for dataset and model versioning.
Construct an empty W&B Artifact. Populate an artifact's contents with methods that begin with add. Once the artifact has all the desired files, you can call run.log_artifact() to log it.
Args:
name (str): A human-readable name for the artifact. Use the name to identify a specific artifact in the W&B App UI or programmatically. You can interactively reference an artifact with the use_artifact Public API. A name can contain letters, numbers, underscores, hyphens, and dots. The name must be unique across a project.
type (str): The artifact's type. Use the type of an artifact to both organize and differentiate artifacts. You can use any string that contains letters, numbers, underscores, hyphens, and dots. Common types include dataset or model. Include model within your type string if you want to link the artifact to the W&B Model Registry. Note that some types are reserved for internal use and cannot be set by users. Such types include job and types that start with wandb-.
description (str | None) = None: A description of the artifact. For Model or Dataset Artifacts, add documentation for your standardized team model or dataset card. View an artifact's description programmatically with the Artifact.description attribute or in the W&B App UI. W&B renders the description as markdown in the W&B App.
metadata (dict[str, Any] | None) = None: Additional information about an artifact. Specify metadata as a dictionary of key-value pairs. You can specify no more than 100 total keys.
incremental: Use the Artifact.new_draft() method instead to modify an existing artifact.
use_as: Deprecated.
is_link: Boolean indication of if the artifact is a linked artifact (True) or source artifact (False).

Returns:
An Artifact object.
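A minimal construction and logging sketch (the file data.csv and the project name are assumptions for illustration):

import wandb

artifact = wandb.Artifact(name="example-dataset", type="dataset")
artifact.add_file("data.csv")  # add a local file to the artifact

with wandb.init(project="my-awesome-project") as run:
    run.log_artifact(artifact)  # log the artifact as output of the run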
Artifact.__init__

__init__(
    name: 'str',
    type: 'str',
    description: 'str | None' = None,
    metadata: 'dict[str, Any] | None' = None,
    incremental: 'bool' = False,
    use_as: 'str | None' = None
) → None
List of one or more semantically-friendly references or
identifying “nicknames” assigned to an artifact version.
Aliases are mutable references that you can programmatically reference. Change an artifact’s alias with the W&B App UI or programmatically. See Create new artifact versions for more information.
The collection this artifact was retrieved from.
A collection is an ordered group of artifact versions. If this artifact was retrieved from a portfolio / linked collection, that collection will be returned rather than the collection that an artifact version originated from. The collection that an artifact originates from is known as the source sequence.
The hash returned when this artifact was committed.
Timestamp when the artifact was created.
A description of the artifact.
The logical digest of the artifact.
The digest is the checksum of the artifact’s contents. If an artifact has the same digest as the current latest version, then log_artifact is a no-op.
The name of the entity that the artifact collection belongs to.
If the artifact is a link, the entity will be the entity of the linked artifact.
The number of files (including references).
The nearest step at which history metrics were logged for the source run of the artifact.
Examples:
run = artifact.logged_by()
if run and (artifact.history_step is not None):
    history = run.sample_history(
        min_step=artifact.history_step,
        max_step=artifact.history_step + 1,
        keys=["my_metric"],
    )
The artifact’s ID.
Boolean flag indicating if the artifact is a link artifact.
True: The artifact is a link artifact to a source artifact. False: The artifact is a source artifact.
Returns a list of all the linked artifacts of a source artifact.
If the artifact is a link artifact (artifact.is_link == True), it will return an empty list. Limited to 500 results.
The artifact’s manifest.
The manifest lists all of its contents, and can’t be changed once the artifact has been logged.
User-defined artifact metadata.
Structured data associated with the artifact.
The artifact name and version of the artifact.
A string with the format {collection}:{alias}. If fetched before an artifact is logged/saved, the name won’t contain the alias. If the artifact is a link, the name will be the name of the linked artifact.
The name of the project that the artifact collection belongs to.
If the artifact is a link, the project will be the project of the linked artifact.
The entity/project/name of the artifact.
If the artifact is a link, the qualified name will be the qualified name of the linked artifact path.
The total size of the artifact in bytes.
Includes any references tracked by this artifact.
Returns the source artifact. The source artifact is the original logged artifact.
If the artifact itself is a source artifact (artifact.is_link == False), it will return itself.
The artifact’s source collection.
The source collection is the collection that the artifact was logged from.
The name of the entity of the source artifact.
The artifact name and version of the source artifact.
A string with the format {source_collection}:{alias}. Before the artifact is saved, contains only the name since the version is not yet known.
The name of the project of the source artifact.
The source_entity/source_project/source_name of the source artifact.
The source artifact’s version.
A string with the format v{number}.
The status of the artifact. One of: “PENDING”, “COMMITTED”, or “DELETED”.
List of one or more tags assigned to this artifact version.
The time-to-live (TTL) policy of an artifact.
Artifacts are deleted shortly after a TTL policy's duration passes. If set to None, the artifact deactivates TTL policies and will not be scheduled for deletion, even if there is a team default TTL. An artifact inherits a TTL policy from the team default if the team administrator defines a default TTL and there is no custom policy set on an artifact.
Raises:
ArtifactNotLoggedError: Unable to fetch inherited TTL if the artifact has not been logged or saved.

The artifact's type. Common types include dataset or model.
The time when the artifact was last updated.
Constructs the URL of the artifact.
Returns:
str: The URL of the artifact.

Deprecated.
The artifact’s version.
A string with the format v{number}. If the artifact is a link artifact, the version will be from the linked collection.
Artifact.add

add(
    obj: 'WBValue',
    name: 'StrPath',
    overwrite: 'bool' = False
) → ArtifactManifestEntry
Add wandb.WBValue obj to the artifact.
Args:
obj: The object to add. Currently supports one of Bokeh, JoinedTable, PartitionedTable, Table, Classes, ImageMask, BoundingBoxes2D, Audio, Image, Video, Html, Object3D.
name: The path within the artifact to add the object.
overwrite: If True, overwrite existing objects with the same file path if applicable.

Returns: The added manifest entry.
Raises:
ArtifactFinalizedError: You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.

Artifact.add_dir

add_dir(
    local_path: 'str',
    name: 'str | None' = None,
    skip_cache: 'bool | None' = False,
    policy: "Literal['mutable', 'immutable'] | None" = 'mutable',
    merge: 'bool' = False
) → None
Add a local directory to the artifact.
Args:
local_path: The path of the local directory.
name: The subdirectory name within an artifact. The name you specify appears in the W&B App UI nested by artifact's type. Defaults to the root of the artifact.
skip_cache: If set to True, W&B will not copy/move files to the cache while uploading.
policy: By default, "mutable".
merge: If False (default), throws ValueError if a file was already added in a previous add_dir call and its content has changed. If True, overwrites existing files with changed content. Always adds new files and never removes files. To replace an entire directory, pass a name when adding the directory using add_dir(local_path, name=my_prefix) and call remove(my_prefix) to remove the directory, then add it again.

Raises:
ArtifactFinalizedError: You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.
ValueError: Policy must be "mutable" or "immutable".

Artifact.add_file

add_file(
    local_path: 'str',
    name: 'str | None' = None,
    is_tmp: 'bool | None' = False,
    skip_cache: 'bool | None' = False,
    policy: "Literal['mutable', 'immutable'] | None" = 'mutable',
    overwrite: 'bool' = False
) → ArtifactManifestEntry
Add a local file to the artifact.
Args:
local_path: The path to the file being added.
name: The path within the artifact to use for the file being added. Defaults to the basename of the file.
is_tmp: If true, then the file is renamed deterministically to avoid collisions.
skip_cache: If True, do not copy files to the cache after uploading.
policy: By default, set to "mutable". If set to "mutable", create a temporary copy of the file to prevent corruption during upload. If set to "immutable", disable protection and rely on the user not to delete or change the file.
overwrite: If True, overwrite the file if it already exists.

Returns: The added manifest entry.
Raises:
ArtifactFinalizedError: You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.
ValueError: Policy must be "mutable" or "immutable".

Artifact.add_reference

add_reference(
    uri: 'ArtifactManifestEntry | str',
    name: 'StrPath | None' = None,
    checksum: 'bool' = True,
    max_objects: 'int | None' = None
) → Sequence[ArtifactManifestEntry]
Add a reference denoted by a URI to the artifact.
Unlike files or directories that you add to an artifact, references are not uploaded to W&B. For more information, see Track external files.
By default, the following schemes are supported:
For HTTP(S) URLs, the size and digest are inferred from the Content-Length and ETag response headers returned by the server. Cloud storage references (for example Amazon S3, GCS, and Azure URLs matching *.blob.core.windows.net) are also supported. For any other scheme, the digest is just a hash of the URI and the size is left blank.
Args:
uri: The URI path of the reference to add. The URI path can be an object returned from Artifact.get_entry to store a reference to another artifact's entry.
name: The path within the artifact to place the contents of this reference.
checksum: Whether or not to checksum the resource(s) located at the reference URI. Checksumming is strongly recommended as it enables automatic integrity validation. Disabling checksumming will speed up artifact creation, but reference directories will not be iterated through, so the objects in the directory will not be saved to the artifact. We recommend setting checksum=False when adding reference objects, in which case a new version will only be created if the reference URI changes.
max_objects: The maximum number of objects to consider when adding a reference that points to a directory or bucket store prefix. By default, the maximum number of objects allowed for Amazon S3, GCS, Azure, and local files is 10,000,000. Other URI schemas do not have a maximum.

Returns: The added manifest entries.
Raises:
ArtifactFinalizedError: You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.

Artifact.checkout

checkout(root: 'str | None' = None) → str
Replace the specified root directory with the contents of the artifact.
WARNING: This will delete all files in root that are not included in the artifact.
Args:
root: The directory to replace with this artifact's files.

Returns: The path of the checked out contents.
Raises:
ArtifactNotLoggedError: If the artifact is not logged.

Artifact.delete

delete(delete_aliases: 'bool' = False) → None
Delete an artifact and its files.
If called on a linked artifact, only the link is deleted, and the source artifact is unaffected.
Use artifact.unlink() instead of artifact.delete() to remove a link between a source artifact and a linked artifact.
Args:
delete_aliases: If set to True, deletes all aliases associated with the artifact. Otherwise, this raises an exception if the artifact has existing aliases. This parameter is ignored if the artifact is linked (a member of a portfolio collection).

Raises:
ArtifactNotLoggedError: If the artifact is not logged.

Artifact.download

download(
    root: 'StrPath | None' = None,
    allow_missing_references: 'bool' = False,
    skip_cache: 'bool | None' = None,
    path_prefix: 'StrPath | None' = None,
    multipart: 'bool | None' = None
) → FilePathStr
Download the contents of the artifact to the specified root directory.
Existing files located within root are not modified. Explicitly delete root before you call download if you want the contents of root to exactly match the artifact.
Args:
root: The directory W&B stores the artifact's files in.
allow_missing_references: If set to True, any invalid reference paths will be ignored while downloading referenced files.
skip_cache: If set to True, the artifact cache will be skipped when downloading and W&B will download each file into the default root or specified download directory.
path_prefix: If specified, only files with a path that starts with the given prefix will be downloaded. Uses unix format (forward slashes).
multipart: If set to None (default), the artifact will be downloaded in parallel using multipart download if individual file size is greater than 2GB. If set to True or False, the artifact will be downloaded in parallel or serially regardless of the file size.

Returns: The path to the downloaded contents.
Raises:
ArtifactNotLoggedError: If the artifact is not logged.

Artifact.file

file(root: 'str | None' = None) → StrPath
Download a single file artifact to the directory you specify with root.
Args:
root: The root directory to store the file. Defaults to ./artifacts/self.name/.

Returns: The full path of the downloaded file.
Raises:
ArtifactNotLoggedError: If the artifact is not logged.
ValueError: If the artifact contains more than one file.

Artifact.files

files(names: 'list[str] | None' = None, per_page: 'int' = 50) → ArtifactFiles
Iterate over all files stored in this artifact.
Args:
names: The filename paths relative to the root of the artifact you wish to list.
per_page: The number of files to return per request.

Returns:
An iterator containing File objects.
Raises:
ArtifactNotLoggedError: If the artifact is not logged.

Artifact.finalize

finalize() → None
Finalize the artifact version.
You cannot modify an artifact version once it is finalized because the artifact is logged as a specific artifact version. Create a new artifact version to log more data to an artifact. An artifact is automatically finalized when you log the artifact with log_artifact.
Artifact.get

get(name: 'str') → WBValue | None
Get the WBValue object located at the artifact relative name.
Args:
name: The artifact relative name to retrieve.

Returns:
W&B object that can be logged with run.log() and visualized in the W&B UI.
Raises:
ArtifactNotLoggedError: If the artifact isn't logged or the run is offline.

Artifact.get_added_local_path_name

get_added_local_path_name(local_path: 'str') → str | None
Get the artifact relative name of a file added by a local filesystem path.
Args:
local_path: The local path to resolve into an artifact relative name.

Returns: The artifact relative name.
Artifact.get_entry

get_entry(name: 'StrPath') → ArtifactManifestEntry
Get the entry with the given name.
Args:
name: The artifact relative name to get.

Returns:
A W&B object.
Raises:
ArtifactNotLoggedError: If the artifact isn't logged or the run is offline.
KeyError: If the artifact doesn't contain an entry with the given name.

Artifact.get_path

get_path(name: 'StrPath') → ArtifactManifestEntry
Deprecated. Use get_entry(name).
Artifact.is_draft

is_draft() → bool
Check if artifact is not saved.
Returns:
Boolean. False if artifact is saved. True if artifact is not saved.
Artifact.json_encode

json_encode() → dict[str, Any]
Returns the artifact encoded to the JSON format.
Returns:
A dict with string keys representing attributes of the artifact.
Artifact.link

link(target_path: 'str', aliases: 'list[str] | None' = None) → Artifact
Link this artifact to a portfolio (a promoted collection of artifacts).
Args:
target_path: The path to the portfolio inside a project. The target path must adhere to one of the following schemas: {portfolio}, {project}/{portfolio}, or {entity}/{project}/{portfolio}. To link the artifact to the Model Registry, rather than to a generic portfolio inside a project, set target_path to the following schema: {"model-registry"}/{Registered Model Name} or {entity}/{"model-registry"}/{Registered Model Name}.
aliases: A list of strings that uniquely identifies the artifact inside the specified portfolio.

Raises:
ArtifactNotLoggedError: If the artifact is not logged.

Returns: The linked artifact.
Artifact.logged_by

logged_by() → Run | None
Get the W&B run that originally logged the artifact.
Returns: The W&B run that originally logged the artifact.
Raises:
ArtifactNotLoggedError: If the artifact is not logged.

Artifact.new_draft

new_draft() → Artifact
Create a new draft artifact with the same content as this committed artifact.
Modifying an existing artifact creates a new artifact version known as an “incremental artifact”. The artifact returned can be extended or modified and logged as a new version.
Returns:
An Artifact object.
Raises:
ArtifactNotLoggedError: If the artifact is not logged.

Artifact.new_file

new_file(
    name: 'str',
    mode: 'str' = 'x',
    encoding: 'str | None' = None
) → Iterator[IO]
Open a new temporary file and add it to the artifact.
Args:
name: The name of the new file to add to the artifact.
mode: The file access mode to use to open the new file.
encoding: The encoding used to open the new file.

Returns: A new file object that can be written to. Upon closing, the file is automatically added to the artifact.
Raises:
ArtifactFinalizedError: You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.

Artifact.remove

remove(item: 'StrPath | ArtifactManifestEntry') → None
Remove an item from the artifact.
Args:
item: The item to remove. Can be a specific manifest entry or the name of an artifact-relative path. If the item matches a directory, all items in that directory will be removed.

Raises:
ArtifactFinalizedError: You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.
FileNotFoundError: If the item isn't found in the artifact.

Artifact.save

save(
    project: 'str | None' = None,
    settings: 'wandb.Settings | None' = None
) → None
Persist any changes made to the artifact.
If currently in a run, that run will log this artifact. If not currently in a run, a run of type “auto” is created to track this artifact.
Args:
project: A project to use for the artifact in the case that a run is not already in context.
settings: A settings object to use when initializing an automatic run. Most commonly used in testing harness.

Artifact.unlink

unlink() → None
Unlink this artifact if it is currently a member of a promoted collection of artifacts.
Raises:
ArtifactNotLoggedError: If the artifact is not logged.
ValueError: If the artifact is not linked; in other words, it is not a member of a portfolio collection.

Artifact.used_by

used_by() → list[Run]
Get a list of the runs that have used this artifact and its linked artifacts.
Returns:
A list of Run objects.
Raises:
ArtifactNotLoggedError: If the artifact is not logged.

Artifact.verify

verify(root: 'str | None' = None) → None
Verify that the contents of an artifact match the manifest.
All files in the directory are checksummed and the checksums are then cross-referenced against the artifact’s manifest. References are not verified.
Args:
root: The directory to verify. If None, the artifact will be downloaded to './artifacts/self.name/'.

Raises:
ArtifactNotLoggedError: If the artifact is not logged.
ValueError: If the verification fails.

Artifact.wait

wait(timeout: 'int | None' = None) → Artifact
If needed, wait for this artifact to finish logging.
Args:
timeout: The time, in seconds, to wait.

Returns:
An Artifact object.
Run

A unit of computation logged by W&B. Typically, this is an ML experiment.
Call wandb.init() to create a new run. wandb.init() starts a new run and returns a wandb.Run object. Each run is associated with a unique ID (run ID). W&B recommends using a context (with statement) manager to automatically finish the run.
For distributed training experiments, you can either track each process separately using one run per process or track all processes to a single run. See Log distributed training experiments for more information.
You can log data to a run with wandb.Run.log(). Anything you log using wandb.Run.log() is sent to that run. See Create an experiment or the wandb.init API reference page for more information.
There is another Run object in the wandb.apis.public namespace. Use that object to interact with runs that have already been created.
Attributes:
summary: (Summary) A summary of the run, which is a dictionary-like object. For more information, see Log summary metrics (https://docs.wandb.ai/guides/track/log/log-summary/).

Examples:
Create a run with wandb.init():
import wandb
# Start a new run and log some data
# Use context manager (`with` statement) to automatically finish the run
acc, loss = 0.9, 0.1  # Example metric values
with wandb.init(entity="entity", project="project") as run:
    run.log({"accuracy": acc, "loss": loss})
Config object associated with this run.
Static config object associated with this run.
The directory where files associated with the run are saved.
True if the run is disabled, False otherwise.
The name of the W&B entity associated with the run.
Entity can be a username or the name of a team or organization.
Returns the name of the group associated with this run.
Grouping runs together allows related experiments to be organized and visualized collectively in the W&B UI. This is especially useful for scenarios such as distributed training or cross-validation, where multiple runs should be viewed and managed as a unified experiment.
In shared mode, where all processes share the same run object, setting a group is usually unnecessary, since there is only one run and no grouping is required.
Identifier for this run.
Name of the job type associated with the run.
View a run’s job type in the run’s Overview page in the W&B App.
You can use this to categorize runs by their job type, such as “training”, “evaluation”, or “inference”. This is useful for organizing and filtering runs in the W&B UI, especially when you have multiple runs with different job types in the same project. For more information, see Organize runs.
Display name of the run.
Display names are not guaranteed to be unique and may be descriptive. By default, they are randomly generated.
Notes associated with the run, if there are any.
Notes can be a multiline string and can also use markdown and latex equations inside $$, like $x + 3$.
True if the run is offline, False otherwise.
Path to the run.
Run paths include entity, project, and run ID, in the format entity/project/run_id.
Name of the W&B project associated with the run.
URL of the W&B project associated with the run, if there is one.
Offline runs do not have a project URL.
True if the run was resumed, False otherwise.
A frozen copy of run’s Settings object.
Unix timestamp (in seconds) of when the run started.
Identifier for the sweep associated with the run, if there is one.
URL of the sweep associated with the run, if there is one.
Offline runs do not have a sweep URL.
Tags associated with the run, if there are any.
The url for the W&B run, if there is one.
Offline runs will not have a url.
Run.alert

alert(
    title: 'str',
    text: 'str',
    level: 'str | AlertLevel | None' = None,
    wait_duration: 'int | float | timedelta | None' = None
) → None
Create an alert with the given title and text.
Args:
title: The title of the alert, must be less than 64 characters long.
text: The text body of the alert.
level: The alert level to use, either: INFO, WARN, or ERROR.
wait_duration: The time to wait (in seconds) before sending another alert with this title.

Run.define_metric

define_metric(
    name: 'str',
    step_metric: 'str | wandb_metric.Metric | None' = None,
    step_sync: 'bool | None' = None,
    hidden: 'bool | None' = None,
    summary: 'str | None' = None,
    goal: 'str | None' = None,
    overwrite: 'bool | None' = None
) → wandb_metric.Metric
Customize metrics logged with wandb.Run.log().
Args:
name: The name of the metric to customize.
step_metric: The name of another metric to serve as the X-axis for this metric in automatically generated charts.
step_sync: Automatically insert the last value of step_metric into wandb.Run.log() if it is not provided explicitly. Defaults to True if step_metric is specified.
hidden: Hide this metric from automatic plots.
summary: Specify aggregate metrics added to summary. Supported aggregations include "min", "max", "mean", "last", "first", "best", "copy" and "none". "none" prevents a summary from being generated. "best" is used together with the goal parameter; "best" is deprecated and should not be used, use "min" or "max" instead. "copy" is deprecated and should not be used.
goal: Specify how to interpret the "best" summary type. Supported options are "minimize" and "maximize". "goal" is deprecated and should not be used, use "min" or "max" instead.
overwrite: If false, then this call is merged with previous define_metric calls for the same metric by using their values for any unspecified parameters. If true, then unspecified parameters overwrite values specified by previous calls.

Returns: An object that represents this call but can otherwise be discarded.
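A short usage sketch (the metric names are illustrative): use "epoch" as the X-axis for a validation metric and keep its minimum in the run summary.

import wandb

with wandb.init(project="my-awesome-project") as run:
    run.define_metric("epoch")
    run.define_metric("val/loss", step_metric="epoch", summary="min")
    for epoch in range(3):
        run.log({"epoch": epoch, "val/loss": 1.0 / (epoch + 1)})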
Run.display

display(height: 'int' = 420, hidden: 'bool' = False) → bool
Display this run in Jupyter.
Run.finish

finish(exit_code: 'int | None' = None, quiet: 'bool | None' = None) → None
Finish a run and upload any remaining data.
Marks the completion of a W&B run and ensures all data is synced to the server. The run’s final state is determined by its exit conditions and sync status.
Run States:
- Finished: The run completed successfully (exit_code=0) with all data synced.
- Failed: The run completed with errors (exit_code!=0).

Args:
exit_code: Integer indicating the run's exit status. Use 0 for success; any other value marks the run as failed.
quiet: Deprecated. Configure logging verbosity using wandb.Settings(quiet=...).

Run.finish_artifact

finish_artifact(
    artifact_or_path: 'Artifact | str',
    name: 'str | None' = None,
    type: 'str | None' = None,
    aliases: 'list[str] | None' = None,
    distributed_id: 'str | None' = None
) → Artifact
Finishes a non-finalized artifact as output of a run.
Subsequent “upserts” with the same distributed ID will result in a new version.
Args:
artifact_or_path:  A path to the contents of this artifact,  can be in the following forms:
- /local/directory
- /local/directory/file.txt
- s3://bucket/path

You can also pass an Artifact object created by calling wandb.Artifact.
name: An artifact name. May be prefixed with entity/project. Valid names can be in the following forms:
- name:version
- name:alias
- digest

This will default to the basename of the path prepended with the current run id if not specified.
type: The type of artifact to log; examples include dataset, model.
aliases: Aliases to apply to this artifact, defaults to ["latest"].
distributed_id: Unique string that all distributed jobs share. If None, defaults to the run's group name.

Returns:
An Artifact object.
Run.link_artifact

link_artifact(
    artifact: 'Artifact',
    target_path: 'str',
    aliases: 'list[str] | None' = None
) → Artifact
Link the given artifact to a portfolio (a promoted collection of artifacts).
Linked artifacts are visible in the UI for the specified portfolio.
Args:
artifact: The (public or local) artifact which will be linked.
target_path: str - takes the following forms: {portfolio}, {project}/{portfolio}, or {entity}/{project}/{portfolio}.
aliases: List[str] - optional alias(es) that will only be applied on this linked artifact inside the portfolio. The alias "latest" will always be applied to the latest version of an artifact that is linked.

Returns: The linked artifact.
Run.link_model

link_model(
    path: 'StrPath',
    registered_model_name: 'str',
    name: 'str | None' = None,
    aliases: 'list[str] | None' = None
) → Artifact | None
Log a model artifact version and link it to a registered model in the model registry.
Linked model versions are visible in the UI for the specified registered model.
This method will:
Args:
path:  (str) A path to the contents of this model, can be in the  following forms:
- /local/directory
- /local/directory/file.txt
- s3://bucket/path

registered_model_name: The name of the registered model that the model is to be linked to. A registered model is a collection of model versions linked to the model registry, typically representing a team's specific ML Task. The entity that this registered model belongs to will be derived from the run.
name: The name of the model artifact that files in 'path' will be logged to. This will default to the basename of the path prepended with the current run id if not specified.
aliases: Aliases that will only be applied on this linked artifact inside the registered model. The alias "latest" will always be applied to the latest version of an artifact that is linked.

Raises:
AssertionError:  If registered_model_name is a path or  if model artifact ’name’ is of a type that does not contain  the substring ‘model’.ValueError:  If name has invalid special characters.Returns:
The linked artifact if linking was successful, otherwise None.
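A minimal sketch of logging and linking a model in one call (the local file and registered model name are placeholders):
import wandb

with wandb.init(project="my-awesome-project") as run:
    # Assumes "model.txt" exists locally; the registered model name is illustrative.
    run.link_model(
        path="model.txt",
        registered_model_name="my-registered-model",
        aliases=["staging"],
    )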
Run.log
log(
    data: 'dict[str, Any]',
    step: 'int | None' = None,
    commit: 'bool | None' = None
) → None
Upload run data.
Use log to log data from runs, such as scalars, images, video, histograms, plots, and tables. See Log objects and media for code snippets, best practices, and more.
Basic usage:
import wandb
with wandb.init() as run:
     run.log({"train-loss": 0.5, "accuracy": 0.9})
The previous code snippet saves the loss and accuracy to the run’s history and updates the summary values for these metrics.
Visualize logged data in a workspace at wandb.ai, or locally on a self-hosted instance of the W&B app, or export data to visualize and explore locally, such as in a Jupyter notebook, with the Public API.
Logged values don’t have to be scalars. You can log any W&B supported Data Type such as images, audio, video, and more. For example, you can use wandb.Table to log structured data. See Log tables, visualize and query data tutorial for more details.
W&B organizes metrics with a forward slash (/) in their name into sections named using the text before the final slash. For example, the following results in two sections named “train” and “validate”:
with wandb.init() as run:
     # Log metrics in the "train" section.
     run.log(
         {
             "train/accuracy": 0.9,
             "train/loss": 30,
             "validate/accuracy": 0.8,
             "validate/loss": 20,
         }
     )
Only one level of nesting is supported; run.log({"a/b/c": 1}) produces a section named “a/b”.
run.log() is not intended to be called more than a few times per second. For optimal performance, limit your logging to once every N iterations, or collect data over multiple iterations and log it in a single step.
By default, each call to log creates a new “step”. The step must always increase, and it is not possible to log to a previous step. You can use any metric as the X axis in charts. See Custom log axes for more details.
In many cases, it is better to treat the W&B step like you’d treat a timestamp rather than a training step.
with wandb.init() as run:
     # Example: log an "epoch" metric for use as an X axis.
     run.log({"epoch": 40, "train-loss": 0.5})
It is possible to use multiple wandb.Run.log() invocations to log to the same step with the step and commit parameters. The following are all equivalent:
with wandb.init() as run:
     # Normal usage:
     run.log({"train-loss": 0.5, "accuracy": 0.8})
     run.log({"train-loss": 0.4, "accuracy": 0.9})
     # Implicit step without auto-incrementing:
     run.log({"train-loss": 0.5}, commit=False)
     run.log({"accuracy": 0.8})
     run.log({"train-loss": 0.4}, commit=False)
     run.log({"accuracy": 0.9})
     # Explicit step:
     run.log({"train-loss": 0.5}, step=current_step)
     run.log({"accuracy": 0.8}, step=current_step)
     current_step += 1
     run.log({"train-loss": 0.4}, step=current_step)
     run.log({"accuracy": 0.9}, step=current_step)
Args:
- data: A dict with str keys and values that are serializable Python objects, including: int, float, and string; any of the wandb.data_types; lists, tuples, and NumPy arrays of serializable Python objects; other dicts of this structure.
- step: The step number to log. If None, an implicit auto-incrementing step is used. See the notes in the description.
- commit: If true, finalize and upload the step. If false, accumulate data for the step. See the notes in the description. If step is None, the default is commit=True; otherwise, the default is commit=False.

Examples: For more detailed examples, see our guides to logging.
Basic usage
import wandb
with wandb.init() as run:
    run.log({"train-loss": 0.5, "accuracy": 0.9
Incremental logging
import wandb
with wandb.init() as run:
    run.log({"loss": 0.2}, commit=False)
    # Somewhere else when I'm ready to report this step:
    run.log({"accuracy": 0.8})
Histogram
import numpy as np
import wandb
# sample gradients at random from normal distribution
gradients = np.random.randn(100, 100)
with wandb.init() as run:
    run.log({"gradients": wandb.Histogram(gradients)})
Image from NumPy
import numpy as np
import wandb
with wandb.init() as run:
    examples = []
    for i in range(3):
         pixels = np.random.randint(low=0, high=256, size=(100, 100, 3))
         image = wandb.Image(pixels, caption=f"random field {i}")
         examples.append(image)
    run.log({"examples": examples})
Image from PIL
import numpy as np
from PIL import Image as PILImage
import wandb
with wandb.init() as run:
    examples = []
    for i in range(3):
         pixels = np.random.randint(
             low=0,
             high=256,
             size=(100, 100, 3),
             dtype=np.uint8,
         )
         pil_image = PILImage.fromarray(pixels, mode="RGB")
         image = wandb.Image(pil_image, caption=f"random field {i}")
         examples.append(image)
    run.log({"examples": examples})
Video from NumPy
import numpy as np
import wandb
with wandb.init() as run:
    # axes are (time, channel, height, width)
    frames = np.random.randint(
         low=0,
         high=256,
         size=(10, 3, 100, 100),
         dtype=np.uint8,
    )
    run.log({"video": wandb.Video(frames, fps=4)})
Matplotlib plot
from matplotlib import pyplot as plt
import numpy as np
import wandb
with wandb.init() as run:
    fig, ax = plt.subplots()
    x = np.linspace(0, 10)
    y = x * x
    ax.plot(x, y)  # plot y = x^2
    run.log({"chart": fig})
PR Curve
import wandb
with wandb.init() as run:
    run.log({"pr": wandb.plot.pr_curve(y_test, y_probas, labels)})
3D Object
import wandb
with wandb.init() as run:
    run.log(
         {
             "generated_samples": [
                 wandb.Object3D(open("sample.obj")),
                 wandb.Object3D(open("sample.gltf")),
                 wandb.Object3D(open("sample.glb")),
             ]
         }
    )
Raises:
- wandb.Error: If called before wandb.init().
- ValueError: If invalid data is passed.

Run.log_artifact
log_artifact(
    artifact_or_path: 'Artifact | StrPath',
    name: 'str | None' = None,
    type: 'str | None' = None,
    aliases: 'list[str] | None' = None,
    tags: 'list[str] | None' = None
) → Artifact
Declare an artifact as an output of a run.
Args:
- artifact_or_path: (str or Artifact) A path to the contents of this artifact. Can be in the following forms:
  - /local/directory
  - /local/directory/file.txt
  - s3://bucket/path
  You can also pass an Artifact object created by calling wandb.Artifact.
- name: (str, optional) An artifact name. Defaults to the basename of the path prepended with the current run ID if not specified. Valid names can be in the following forms:
  - name:version
  - name:alias
  - digest
- type: (str) The type of artifact to log. Examples include dataset and model.
- aliases: (list, optional) Aliases to apply to this artifact. Defaults to ["latest"].
- tags: (list, optional) Tags to apply to this artifact, if any.

Returns:
An Artifact object.
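A minimal sketch of logging a local file as an artifact (the file name, artifact name, aliases, and tags are placeholders):
import wandb

with wandb.init(project="my-awesome-project") as run:
    # Assumes "data.csv" exists locally.
    run.log_artifact(
        artifact_or_path="data.csv",
        name="my-dataset",
        type="dataset",
        aliases=["latest", "baseline"],
        tags=["tabular"],
    )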
Run.log_code
log_code(
    root: 'str | None' = '.',
    name: 'str | None' = None,
    include_fn: 'Callable[[str, str], bool] | Callable[[str], bool]' = _is_py_requirements_or_dockerfile,
    exclude_fn: 'Callable[[str, str], bool] | Callable[[str], bool]' = exclude_wandb_fn
) → Artifact | None
Save the current state of your code to a W&B Artifact.
By default, it walks the current directory and logs all files that end with .py.
Args:
- root: The relative (to os.getcwd()) or absolute path to recursively find code from.
- name: (str, optional) The name of the code artifact. By default, the artifact is named source-$PROJECT_ID-$ENTRYPOINT_RELPATH. There may be scenarios where you want many runs to share the same artifact; specifying name allows you to achieve that.
- include_fn: A callable that accepts a file path and (optionally) root path and returns True when it should be included and False otherwise. This defaults to lambda path, root: path.endswith(".py").
- exclude_fn: A callable that accepts a file path and (optionally) root path and returns True when it should be excluded and False otherwise. This defaults to a function that excludes all files within <root>/.wandb/ and <root>/wandb/ directories.

Examples: Basic usage
import wandb
with wandb.init() as run:
    run.log_code()
Advanced usage
import os
import wandb
with wandb.init() as run:
    run.log_code(
         root="../",
         include_fn=lambda path: path.endswith(".py") or path.endswith(".ipynb"),
         exclude_fn=lambda path, root: os.path.relpath(path, root).startswith(
             "cache/"
         ),
    )
Returns:
An Artifact object if code was logged
Run.log_model
log_model(
    path: 'StrPath',
    name: 'str | None' = None,
    aliases: 'list[str] | None' = None
) → None
Logs a model artifact containing the contents inside the ‘path’ to a run and marks it as an output to this run.
The name of model artifact can only contain alphanumeric characters, underscores, and hyphens.
Args:
- path: (str) A path to the contents of this model. Can be in the following forms:
  - /local/directory
  - /local/directory/file.txt
  - s3://bucket/path
- name: A name to assign to the model artifact that the file contents will be added to. Defaults to the basename of the path prepended with the current run ID if not specified.
- aliases: Aliases to apply to the created model artifact. Defaults to ["latest"].

Raises:
- ValueError: If name has invalid special characters.

Returns: None
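A minimal sketch of logging a serialized model file as a model artifact (the file and artifact names are placeholders):
import wandb

with wandb.init(project="my-awesome-project") as run:
    # Assumes a trained model has already been serialized to "model.txt".
    run.log_model(path="model.txt", name="my-model", aliases=["best"])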
Run.mark_preempting
mark_preempting() → None
Mark this run as preempting.
Also tells the internal process to immediately report this to server.
Run.restore
restore(
    name: 'str',
    run_path: 'str | None' = None,
    replace: 'bool' = False,
    root: 'str | None' = None
) → None | TextIO
Download the specified file from cloud storage.
File is placed into the current directory or run directory. By default, will only download the file if it doesn’t already exist.
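For example, a minimal sketch of restoring a file from an earlier run (the file name and run path are placeholders):
import wandb

run = wandb.init(project="my-awesome-project")
# Pull "model.txt" from an earlier run identified by entity/project/run_id.
restored = run.restore("model.txt", run_path="my-team/my-awesome-project/abc123")
if restored is not None:
    print(restored.read())
run.finish()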
Args:
- name: The name of the file.
- run_path: Optional path to a run to pull files from, i.e. username/project_name/run_id. Required if wandb.init has not been called.
- replace: Whether to download the file even if it already exists locally.
- root: The directory to download the file to. Defaults to the current directory, or the run directory if wandb.init was called.

Returns: None if it can't find the file, otherwise a file object open for reading.
Raises:
- CommError: If W&B can't connect to the W&B backend.
- ValueError: If the file is not found or run_path can't be found.

Run.save
save(
    glob_str: 'str | os.PathLike',
    base_path: 'str | os.PathLike | None' = None,
    policy: 'PolicyName' = 'live'
) → bool | list[str]
Sync one or more files to W&B.
Relative paths are relative to the current working directory.
A Unix glob, such as “myfiles/*”, is expanded at the time save is called regardless of the policy. In particular, new files are not picked up automatically.
A base_path may be provided to control the directory structure of uploaded files. It should be a prefix of glob_str, and the directory structure beneath it is preserved.
When given an absolute path or glob and no base_path, one directory level is preserved, as in the examples below.
Args:
- glob_str: A relative or absolute path or Unix glob.
- base_path: A path to use to infer a directory structure; see examples.
- policy: One of live, now, or end.
Returns: Paths to the symlinks created for the matched files.
For historical reasons, this may return a boolean in legacy code.
import wandb
run = wandb.init()
run.save("these/are/myfiles/*")
# => Saves files in a "these/are/myfiles/" folder in the run.
run.save("these/are/myfiles/*", base_path="these")
# => Saves files in an "are/myfiles/" folder in the run.
run.save("/User/username/Documents/run123/*.txt")
# => Saves files in a "run123/" folder in the run. See note below.
run.save("/User/username/Documents/run123/*.txt", base_path="/User")
# => Saves files in a "username/Documents/run123/" folder in the run.
run.save("files/*/saveme.txt")
# => Saves each "saveme.txt" file in an appropriate subdirectory
#    of "files/".
# Explicitly finish the run since a context manager is not used.
run.finish()
Run.status
status() → RunStatus
Get sync info from the internal backend, about the current run’s sync status.
Run.unwatch
unwatch(
    models: 'torch.nn.Module | Sequence[torch.nn.Module] | None' = None
) → None
Remove pytorch model topology, gradient and parameter hooks.
Args:
- models: Optional list of PyTorch models that have had watch called on them.

Run.upsert_artifact
upsert_artifact(
    artifact_or_path: 'Artifact | str',
    name: 'str | None' = None,
    type: 'str | None' = None,
    aliases: 'list[str] | None' = None,
    distributed_id: 'str | None' = None
) → Artifact
Declare (or append to) a non-finalized artifact as output of a run.
Note that you must call run.finish_artifact() to finalize the artifact. This is useful when distributed jobs need to all contribute to the same artifact.
Args:
- artifact_or_path: A path to the contents of this artifact. Can be in the following forms:
  - /local/directory
  - /local/directory/file.txt
  - s3://bucket/path
- name: An artifact name. May be prefixed with "entity/project". Defaults to the basename of the path prepended with the current run ID if not specified. Valid names can be in the following forms:
  - name:version
  - name:alias
  - digest
- type: The type of artifact to log. Common examples include dataset and model.
- aliases: Aliases to apply to this artifact. Defaults to ["latest"].
- distributed_id: Unique string that all distributed jobs share. If None, defaults to the run's group name.

Returns:
An Artifact object.
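A minimal sketch of the distributed pattern described above, in which several jobs upsert into the same artifact and one job finalizes it (the project, group, directories, and distributed ID are placeholders):
import wandb

DISTRIBUTED_ID = "dataset-build-42"  # shared by every job in this group

# Each worker contributes its shard to the same non-finalized artifact.
with wandb.init(project="my-awesome-project", group=DISTRIBUTED_ID) as run:
    run.upsert_artifact(
        artifact_or_path="./shard-0",  # directory produced by this worker
        name="combined-dataset",
        type="dataset",
        distributed_id=DISTRIBUTED_ID,
    )

# After all workers have upserted, one job finalizes the artifact.
with wandb.init(project="my-awesome-project", group=DISTRIBUTED_ID) as run:
    run.finish_artifact(
        artifact_or_path="./final-metadata",
        name="combined-dataset",
        type="dataset",
        distributed_id=DISTRIBUTED_ID,
    )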
Run.use_artifact
use_artifact(
    artifact_or_name: 'str | Artifact',
    type: 'str | None' = None,
    aliases: 'list[str] | None' = None,
    use_as: 'str | None' = None
) → Artifact
Declare an artifact as an input to a run.
Call download or file on the returned object to get the contents locally.
Args:
- artifact_or_name: The name of the artifact to use, or an Artifact object. The name may be prefixed with the entity and project the artifact was logged to (for example, "<entity>/<project>/<name>:<alias>").
- type: The type of artifact to use.
- aliases: Aliases to apply to this artifact.
- use_as: This argument is deprecated and does nothing.

Returns:
An Artifact object.
Examples:
import wandb
run = wandb.init(project="<example>")
# Use an artifact by name and alias
artifact_a = run.use_artifact(artifact_or_name="<name>:<alias>")
# Use an artifact by name and version
artifact_b = run.use_artifact(artifact_or_name="<name>:v<version>")
# Use an artifact by entity/project/name:alias
artifact_c = run.use_artifact(
   artifact_or_name="<entity>/<project>/<name>:<alias>"
)
# Use an artifact by entity/project/name:version
artifact_d = run.use_artifact(
   artifact_or_name="<entity>/<project>/<name>:v<version>"
)
# Explicitly finish the run since a context manager is not used.
run.finish()
Run.use_model
use_model(name: 'str') → FilePathStr
Download the files logged in a model artifact ’name’.
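For example, a minimal sketch (the model artifact name is a placeholder):
import wandb

with wandb.init(project="my-awesome-project") as run:
    # Download the files of a previously logged model artifact.
    model_path = run.use_model(name="my-model:latest")
    print(f"Model files downloaded to {model_path}")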
Args:
- name: A model artifact name. name must match the name of an existing logged model artifact. May be prefixed with entity/project/. Valid names can be in the following forms:
  - model_artifact_name:version
  - model_artifact_name:alias
Returns:
- path (str): Path to the downloaded model artifact file(s).

Raises:
- AssertionError: If the model artifact name is of a type that does not contain the substring "model".

Run.watch
watch(
    models: 'torch.nn.Module | Sequence[torch.nn.Module]',
    criterion: 'torch.F | None' = None,
    log: "Literal['gradients', 'parameters', 'all'] | None" = 'gradients',
    log_freq: 'int' = 1000,
    idx: 'int | None' = None,
    log_graph: 'bool' = False
) → None
Hook into given PyTorch model to monitor gradients and the model’s computational graph.
This function can track parameters, gradients, or both during training.
Args:
- models: A single model or a sequence of models to be monitored.
- criterion: The loss function being optimized (optional).
- log: Specifies whether to log "gradients", "parameters", or "all". Set to None to disable logging. (default="gradients")
- log_freq: Frequency (in batches) to log gradients and parameters. (default=1000)
- idx: Index used when tracking multiple models with wandb.watch. (default=None)
- log_graph: Whether to log the model's computational graph. (default=False)

Raises:
ValueError:  If wandb.init() has not been called or if any of the models are not instances  of torch.nn.Module.
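A minimal sketch of watching and unwatching a model (requires PyTorch; the model here is a toy placeholder):
import torch.nn as nn
import wandb

model = nn.Linear(10, 1)  # toy model standing in for a real network

with wandb.init(project="my-awesome-project") as run:
    # Log gradients and parameters every 100 batches.
    run.watch(models=model, log="all", log_freq=100)
    # ... training loop with loss.backward() and optimizer steps goes here ...
    run.unwatch(models=model)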
Settings for the W&B SDK.
This class manages configuration settings for the W&B SDK,
ensuring type safety and validation of all settings. Settings are accessible
as attributes and can be initialized programmatically, through environment
variables (WANDB_ prefix), and with configuration files.
The settings are organized into three categories: public settings that users can safely modify, internal settings (prefixed with x_) that control low-level SDK behavior, and computed settings that are derived automatically from other settings or the environment.
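A minimal sketch of configuring settings, either programmatically or through environment variables (the project name is a placeholder):
import os
import wandb

# Configure settings programmatically when initializing a run...
run = wandb.init(
    project="my-awesome-project",
    settings=wandb.Settings(quiet=True, console="off"),
)
run.finish()

# ...or through environment variables with the WANDB_ prefix.
os.environ["WANDB_QUIET"] = "true"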
Attributes:
allow_offline_artifacts (bool): Flag to allow table artifacts to be synced in offline mode. To revert to the old behavior, set this to False.
allow_val_change (bool): Flag to allow modification of Config values after they’ve been set.
anonymous (Optional): Controls anonymous data logging. Possible values are "never", "allow", and "must".
api_key (Optional): The W&B API key.
azure_account_url_to_access_key (Optional): Mapping of Azure account URLs to their corresponding access keys for Azure integration.
base_url (str): The URL of the W&B backend for data synchronization.
code_dir (Optional): Directory containing the code to be tracked by W&B.
config_paths (Optional): Paths to files to load configuration from into the Config object.
console (Literal): The type of console capture to be applied. Possible values are:
- "auto": Automatically selects the console capture method based on the system environment and settings.
- "off": Disables console capture.
- "redirect": Redirects low-level file descriptors for capturing output.
- "wrap": Overrides the write methods of sys.stdout/sys.stderr. Will be mapped to either "wrap_raw" or "wrap_emu" based on the state of the system.
- "wrap_raw": Same as "wrap" but captures raw output directly instead of through an emulator. Derived from the wrap setting and should not be set manually.
- "wrap_emu": Same as "wrap" but captures output through an emulator. Derived from the wrap setting and should not be set manually.
console_multipart (bool): Whether to produce multipart console log files.
credentials_file (str): Path to file for writing temporary access tokens.
disable_code (bool): Whether to disable capturing the code.
disable_git (bool): Whether to disable capturing the git state.
disable_job_creation (bool): Whether to disable the creation of a job artifact for W&B Launch.
docker (Optional): The Docker image used to execute the script.
email (Optional): The email address of the user.
entity (Optional): The W&B entity, such as a user or a team.
force (bool): Whether to pass the force flag to wandb.login().
fork_from (Optional): Specifies a point in a previous execution of a run to fork from. The point is defined by the run ID, a metric, and its value. Currently, only the metric ‘_step’ is supported.
git_commit (Optional): The git commit hash to associate with the run.
git_remote (str): The git remote to associate with the run.
git_remote_url (Optional): The URL of the git remote repository.
git_root (Optional): Root directory of the git repository.
host (Optional): Hostname of the machine running the script.
http_proxy (Optional): Custom proxy servers for http requests to W&B.
https_proxy (Optional): Custom proxy servers for https requests to W&B.
identity_token_file (Optional): Path to file containing an identity token (JWT) for authentication.
ignore_globs (Sequence): Unix glob patterns relative to files_dir specifying files to exclude from upload.
init_timeout (float): Time in seconds to wait for the wandb.init call to complete before timing out.
insecure_disable_ssl (bool): Whether to insecurely disable SSL verification.
job_name (Optional): Name of the Launch job running the script.
job_source (Optional): Source type for Launch.
label_disable (bool): Whether to disable automatic labeling features.
launch_config_path (Optional): Path to the launch configuration file.
login_timeout (Optional): Time in seconds to wait for login operations before timing out.
max_end_of_run_history_metrics (int): Maximum number of history sparklines to display at the end of a run.
max_end_of_run_summary_metrics (int): Maximum number of summary metrics to display at the end of a run.
mode (Literal): The operating mode for W&B logging and synchronization.
notebook_name (Optional): Name of the notebook if running in a Jupyter-like environment.
organization (Optional): The W&B organization.
program (Optional): Path to the script that created the run, if available.
program_abspath (Optional): The absolute path from the root repository directory to the script that created the run. Root repository directory is defined as the directory containing the .git directory, if it exists. Otherwise, it’s the current working directory.
program_relpath (Optional): The relative path to the script that created the run.
project (Optional): The W&B project ID.
quiet (bool): Flag to suppress non-essential output.
reinit (Union): What to do when wandb.init() is called while a run is active. Options:
- "finish_previous": Finishes the active run before initializing a new one.
- "return_previous": Returns the most recently created run that has not been finished, instead of creating a new one. This does not update wandb.run; see the "create_new" option.
- "create_new": Creates a new run without modifying other active runs. Does not update wandb.run and top-level functions like wandb.log. Because of this, some older integrations that rely on the global run will not work.
Can also be a boolean, but this is deprecated: False is the same as "return_previous", and True is the same as "finish_previous".
relogin (bool): Flag to force a new login attempt.
resume (Optional): Specifies the resume behavior for the run. Options include "allow", "must", "never", and "auto".
resume_from (Optional): Specifies a point in a previous execution of a run to resume from. The point is defined by the run ID, a metric, and its value. Currently, only the metric ‘_step’ is supported.
root_dir (str): The root directory to use as the base for all run-related paths. In particular, this is used to derive the wandb directory and the run directory.
run_group (Optional): Group identifier for related runs. Used for grouping runs in the UI.
run_id (Optional): The ID of the run.
run_job_type (Optional): Type of job being run (e.g., training, evaluation).
run_name (Optional): Human-readable name for the run.
run_notes (Optional): Additional notes or description for the run.
run_tags (Optional): Tags to associate with the run for organization and filtering.
sagemaker_disable (bool): Flag to disable SageMaker-specific functionality.
save_code (Optional): Whether to save the code associated with the run.
settings_system (Optional): Path to the system-wide settings file.
show_errors (bool): Whether to display error messages.
show_info (bool): Whether to display informational messages.
show_warnings (bool): Whether to display warning messages.
silent (bool): Flag to suppress all output.
strict (Optional): Whether to enable strict mode for validation and error checking.
summary_timeout (int): Time in seconds to wait for summary operations before timing out.
sweep_id (Optional): Identifier of the sweep this run belongs to.
sweep_param_path (Optional): Path to the sweep parameters configuration.
symlink (bool): Whether to use symlinks (True by default except on Windows).
sync_tensorboard (Optional): Whether to synchronize TensorBoard logs with W&B.
table_raise_on_max_row_limit_exceeded (bool): Whether to raise an exception when table row limits are exceeded.
username (Optional): Username.
x_disable_meta (bool): Flag to disable the collection of system metadata.
x_disable_stats (bool): Flag to disable the collection of system metrics.
x_extra_http_headers (Optional): Additional headers to add to all outgoing HTTP requests.
x_label (Optional): Label to assign to system metrics and console logs collected for the run. This is used to group data by on the frontend and can be used to distinguish data from different processes in a distributed training job.
x_primary (bool): Determines whether to save internal wandb files and metadata. In a distributed setting, this is useful for avoiding file overwrites from secondary processes when only system metrics and logs are needed, as the primary process handles the main logging.
x_save_requirements (bool): Flag to save the requirements file.
x_server_side_derived_summary (bool): Flag to delegate automatic computation of summary from history to the server. This does not disable user-provided summary updates.
x_service_wait (float): Time in seconds to wait for the wandb-core internal service to start.
x_skip_transaction_log (bool): Whether to skip saving the run events to the transaction log. This is only relevant for online runs. Can be used to reduce the amount of data written to disk. Should be used with caution, as it removes the guarantees about recoverability.
x_stats_cpu_count (Optional): System CPU count. If set, overrides the auto-detected value in the run metadata.
x_stats_cpu_logical_count (Optional): Logical CPU count. If set, overrides the auto-detected value in the run metadata.
x_stats_disk_paths (Optional): System paths to monitor for disk usage.
x_stats_gpu_count (Optional): GPU device count. If set, overrides the auto-detected value in the run metadata.
x_stats_gpu_device_ids (Optional): GPU device indices to monitor. If not set, the system monitor captures metrics for all GPUs. Assumes 0-based indexing matching CUDA/ROCm device enumeration.
x_stats_gpu_type (Optional): GPU device type. If set, overrides the auto-detected value in the run metadata.
x_stats_open_metrics_endpoints (Optional): OpenMetrics /metrics endpoints to monitor for system metrics.
x_stats_open_metrics_filters (Union): Filter to apply to metrics collected from OpenMetrics /metrics endpoints.
Supports two formats:
x_stats_open_metrics_http_headers (Optional): HTTP headers to add to OpenMetrics requests.
x_stats_sampling_interval (float): Sampling interval for the system monitor in seconds.
x_stats_track_process_tree (bool): Monitor the entire process tree for resource usage, starting from x_stats_pid.
When True, the system monitor aggregates the RSS, CPU%, and thread count
from the process with PID x_stats_pid and all of its descendants.
This can have a performance overhead and is disabled by default.
x_update_finish_state (bool): Flag to indicate whether this process can update the run’s final state on the server. Set to False in distributed training when only the main process should determine the final state.
Defines Data Types for logging interactive visualizations to W&B.
Audio
W&B class for audio clips.
Audio.__init__
__init__(
    data_or_path: Union[str, pathlib.Path, list, ForwardRef('np.ndarray')],
    sample_rate: Optional[int] = None,
    caption: Optional[str] = None
)
Accept a path to an audio file or a numpy array of audio data.
Args:
- data_or_path: A path to an audio file or a NumPy array of audio data.
- sample_rate: Sample rate, required when passing in a raw NumPy array of audio data.
- caption: Caption to display with audio.

Audio.durations
durations(audio_list)
Calculate the duration of the audio files.
Audio.sample_rates
sample_rates(audio_list)
Get sample rates of the audio files.
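A minimal sketch of logging a synthesized audio clip from a NumPy array (the project name is a placeholder):
import numpy as np
import wandb

with wandb.init(project="my-awesome-project") as run:
    # One second of a 440 Hz sine wave sampled at 16 kHz.
    sample_rate = 16000
    t = np.linspace(0, 1, sample_rate, endpoint=False)
    tone = np.sin(2 * np.pi * 440 * t)
    run.log({"tone": wandb.Audio(tone, sample_rate=sample_rate, caption="440 Hz tone")})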
box3d
box3d(
    center: 'npt.ArrayLike',
    size: 'npt.ArrayLike',
    orientation: 'npt.ArrayLike',
    color: 'RGBColor',
    label: 'Optional[str]' = None,
    score: 'Optional[numeric]' = None
) → Box3D
Returns a Box3D.
Args:
- center: The center point of the box as a length-3 ndarray.
- size: The box's X, Y, and Z dimensions as a length-3 ndarray.
- orientation: The rotation transforming global XYZ coordinates into the box's local XYZ coordinates, given as a length-4 ndarray [r, x, y, z] corresponding to the non-zero quaternion r + xi + yj + zk.
- color: The box's color as an (r, g, b) tuple with 0 <= r, g, b <= 1.
- label: An optional label for the box.
- score: An optional score for the box.

Html
W&B class for logging HTML content to W&B.
Html.__init__
__init__(
    data: Union[str, pathlib.Path, ForwardRef('TextIO')],
    inject: bool = True,
    data_is_not_path: bool = False
) → None
Creates a W&B HTML object.
Args:
- data: A string that is a path to a file with the extension ".html", or a string or IO object containing literal HTML.
- inject: Add a stylesheet to the HTML object. If set to False, the HTML will pass through unchanged.
- data_is_not_path: If set to False, the data will be treated as a path to a file.

Examples: It can be initialized by providing a path to a file:
with wandb.init() as run:
    run.log({"html": wandb.Html("./index.html")})
Alternatively, it can be initialized by providing literal HTML, in either a string or IO object:
with wandb.init() as run:
    run.log({"html": wandb.Html("<h1>Hello, world!</h1>")})
Image
A class for logging images to W&B.
Image.__init__
__init__(
    data_or_path: 'ImageDataOrPathType',
    mode: Optional[str] = None,
    caption: Optional[str] = None,
    grouping: Optional[int] = None,
    classes: Optional[Union[ForwardRef('Classes'), Sequence[dict]]] = None,
    boxes: Optional[Union[Dict[str, ForwardRef('BoundingBoxes2D')], Dict[str, dict]]] = None,
    masks: Optional[Union[Dict[str, ForwardRef('ImageMask')], Dict[str, dict]]] = None,
    file_type: Optional[str] = None,
    normalize: bool = True
) → None
Initialize a wandb.Image object.
Args:
- data_or_path: Accepts a NumPy array/PyTorch tensor of image data, a PIL image object, or a path to an image file. If a NumPy array or PyTorch tensor is provided, the image data will be saved to the given file type. If the values are not in the range [0, 255] or all values are in the range [0, 1], the image pixel values will be normalized to the range [0, 255] unless normalize is set to False.
- mode: The PIL mode for an image. Most common are "L", "RGB", "RGBA". Full explanation at https://pillow.readthedocs.io/en/stable/handbook/concepts.html#modes
- caption: Label for display of the image.
- grouping: The grouping number for the image.
- classes: A list of class information for the image, used for labeling bounding boxes and image masks.
- boxes: A dictionary containing bounding box information for the image. See https://docs.wandb.ai/ref/python/data-types/boundingboxes2d/
- masks: A dictionary containing mask information for the image. See https://docs.wandb.ai/ref/python/data-types/imagemask/
- file_type: The file type to save the image as. This parameter has no effect if data_or_path is a path to an image file.
- normalize: If True, normalize the image pixel values to fall within the range [0, 255]. Normalize is only applied if data_or_path is a NumPy array or PyTorch tensor.

Examples: Create a wandb.Image from a numpy array
import numpy as np
import wandb
with wandb.init() as run:
    examples = []
    for i in range(3):
         pixels = np.random.randint(low=0, high=256, size=(100, 100, 3))
         image = wandb.Image(pixels, caption=f"random field {i}")
         examples.append(image)
    run.log({"examples": examples})
Create a wandb.Image from a PILImage
import numpy as np
from PIL import Image as PILImage
import wandb
with wandb.init() as run:
    examples = []
    for i in range(3):
         pixels = np.random.randint(
             low=0, high=256, size=(100, 100, 3), dtype=np.uint8
         )
         pil_image = PILImage.fromarray(pixels, mode="RGB")
         image = wandb.Image(pil_image, caption=f"random field {i}")
         examples.append(image)
    run.log({"examples": examples})
Log .jpg rather than .png (default)
import numpy as np
import wandb
with wandb.init() as run:
    examples = []
    for i in range(3):
         pixels = np.random.randint(low=0, high=256, size=(100, 100, 3))
         image = wandb.Image(
             pixels, caption=f"random field {i}", file_type="jpg"
         )
         examples.append(image)
    run.log({"examples": examples})
Molecule
W&B class for 3D Molecular data.
Molecule.__init__
__init__(
    data_or_path: Union[str, pathlib.Path, ForwardRef('TextIO')],
    caption: Optional[str] = None,
    **kwargs: str
) → None
Initialize a Molecule object.
Args:
- data_or_path: Molecule can be initialized from a file name or an io object.
- caption: Caption associated with the molecule for display.

Object3D
W&B class for 3D point clouds.
Object3D.__init__
__init__(
    data_or_path: Union[ForwardRef('np.ndarray'), str, pathlib.Path, ForwardRef('TextIO'), dict],
    caption: Optional[str] = None,
    **kwargs: Optional[Union[str, ForwardRef('FileFormat3D')]]
) → None
Creates a W&B Object3D object.
Args:
- data_or_path: Object3D can be initialized from a file or a numpy array.
- caption: Caption associated with the object for display.

Examples: The shape of the numpy array must be one of either:
[[x y z],       ...] nx3
[[x y z c],     ...] nx4 where c is a category with supported range [1, 14]
[[x y z r g b], ...] nx6 where rgb is color
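A minimal sketch of logging a random point cloud in the nx6 format (the project name is a placeholder; colors are assumed to be in the [0, 255] range):
import numpy as np
import wandb

with wandb.init(project="my-awesome-project") as run:
    # 100 random points in the nx6 format: x, y, z, r, g, b.
    xyz = np.random.rand(100, 3)
    rgb = np.random.randint(low=0, high=256, size=(100, 3))
    points = np.concatenate([xyz, rgb], axis=1)
    run.log({"point_cloud": wandb.Object3D(points)})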
Plotly
W&B class for Plotly plots.
Plotly.__init__
__init__(
    val: Union[ForwardRef('plotly.Figure'), ForwardRef('matplotlib.artist.Artist')]
)
Initialize a Plotly object.
Args:
- val: Matplotlib or Plotly figure.

Table
The Table class used to display and analyze tabular data.
Unlike traditional spreadsheets, Tables support numerous types of data: scalar values, strings, numpy arrays, and most subclasses of wandb.data_types.Media. This means you can embed Images, Video, Audio, and other sorts of rich, annotated media directly in Tables, alongside other traditional scalar values.
This class is the primary class used to generate W&B Tables https://docs.wandb.ai/guides/models/tables/.
Table.__init__
__init__(
    columns=None,
    data=None,
    rows=None,
    dataframe=None,
    dtype=None,
    optional=True,
    allow_mixed_types=False,
    log_mode: Optional[Literal['IMMUTABLE', 'MUTABLE', 'INCREMENTAL']] = 'IMMUTABLE'
)
Initializes a Table object.
The rows argument is available for legacy reasons and should not be used. The Table class uses data to mimic the Pandas API.
Args:
- columns: (List[str]) Names of the columns in the table. Defaults to ["Input", "Output", "Expected"].
- data: (List[List[any]]) 2D row-oriented array of values.
- dataframe: (pandas.DataFrame) DataFrame object used to create the table. When set, data and columns arguments are ignored.
- rows: (List[List[any]]) 2D row-oriented array of values.
- optional: (Union[bool, List[bool]]) Determines if None values are allowed. Defaults to True.
  - If a singular bool value, the optionality is enforced for all columns specified at construction time.
  - If a list of bool values, the optionality is applied to each respective column; the list should be the same length as columns.
- allow_mixed_types: (bool) Determines if columns are allowed to have mixed types (disables type validation). Defaults to False.
- log_mode: Optional[str] Controls how the Table is logged when mutations occur. Options:
  - "IMMUTABLE" (default): Table can only be logged once; subsequent logging attempts after the table has been mutated will be no-ops.
  - "MUTABLE": Table can be re-logged after mutations, creating a new artifact version each time it's logged.
  - "INCREMENTAL": Table data is logged incrementally, with each log creating a new artifact entry containing the new data since the last log.

Table.add_column
add_column(name, data, optional=False)
Adds a column of data to the table.
Args:
- name: (str) The unique name of the column.
- data: (list | np.array) A column of homogeneous data.
- optional: (bool) If null-like values are permitted.

Table.add_computed_columns
add_computed_columns(fn)
Adds one or more computed columns based on existing data.
Args:
- fn: A function which accepts one or two parameters, ndx (int) and row (dict), and returns a dict representing new columns for that row, keyed by the new column names.
  - ndx is an integer representing the index of the row. Only included if include_ndx is set to True.
  - row is a dictionary keyed by existing columns.

Table.add_data
add_data(*data)
Adds a new row of data to the table.
The maximum number of rows in a table is determined by wandb.Table.MAX_ARTIFACT_ROWS.
The length of the data should match the number of columns in the table.
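A minimal sketch of building a table row by row and logging it (the project name, columns, and values are placeholders):
import wandb

with wandb.init(project="my-awesome-project") as run:
    # Build a small table and log it.
    table = wandb.Table(columns=["id", "prediction", "score"])
    table.add_data(0, "cat", 0.92)
    table.add_data(1, "dog", 0.87)
    run.log({"predictions": table})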
Table.add_row
add_row(*row)
Deprecated. Use Table.add_data method instead.
Table.cast
cast(col_name, dtype, optional=False)
Casts a column to a specific data type.
This can be one of the normal python classes, an internal W&B type, or an example object, like an instance of wandb.Image or wandb.Classes.
Args:
- col_name (str): The name of the column to cast.
- dtype (class, wandb.wandb_sdk.interface._dtypes.Type, any): The target dtype.
- optional (bool): If the column should allow Nones.

Table.get_column
get_column(name, convert_to=None)
Retrieves a column from the table and optionally converts it to a NumPy object.
Args:
- name: (str) The name of the column.
- convert_to: (str, optional) If "numpy", converts the underlying data to a numpy object.

Table.get_dataframe
get_dataframe()
Returns a pandas.DataFrame of the table.
Table.get_index
get_index()
Returns an array of row indexes for use in other tables to create links.
Video
A class for logging videos to W&B.
Video.__init__
__init__(
    data_or_path: Union[str, pathlib.Path, ForwardRef('np.ndarray'), ForwardRef('TextIO'), ForwardRef('BytesIO')],
    caption: Optional[str] = None,
    fps: Optional[int] = None,
    format: Optional[Literal['gif', 'mp4', 'webm', 'ogg']] = None
)
Initialize a W&B Video object.
Args:
- data_or_path: Video can be initialized with a path to a file or an io object. Video can also be initialized with a numpy tensor, which must be either 4 dimensional or 5 dimensional. The dimensions should be (number of frames, channel, height, width) or (batch, number of frames, channel, height, width). The format parameter must be specified when initializing with a numpy array or io object.
- caption: Caption associated with the video for display.
- fps: The frame rate to use when encoding raw video frames. Default value is 4. This parameter has no effect when data_or_path is a string or bytes.
- format: Format of the video, necessary if initializing with a numpy array or io object. This parameter will be used to determine the format to use when encoding the video data. Accepted values are "gif", "mp4", "webm", or "ogg". If no value is provided, the default format is "gif".

Examples: Log a numpy array as a video
import numpy as np
import wandb
with wandb.init() as run:
    # axes are (number of frames, channel, height, width)
    frames = np.random.randint(
         low=0, high=256, size=(10, 3, 100, 100), dtype=np.uint8
    )
    run.log({"video": wandb.Video(frames, format="mp4", fps=4)})
Create custom charts and visualizations.
bar
bar(
    table: 'wandb.Table',
    label: 'str',
    value: 'str',
    title: 'str' = '',
    split_table: 'bool' = False
) → CustomChart
Constructs a bar chart from a wandb.Table of data.
Args:
- table: A table containing the data for the bar chart.
- label: The name of the column to use for the labels of each bar.
- value: The name of the column to use for the values of each bar.
- title: The title of the bar chart.
- split_table: Whether the table should be split into a separate section in the W&B UI. If True, the table will be displayed in a section named "Custom Chart Tables". Default is False.

Returns:
- CustomChart: A custom chart object that can be logged to W&B. To log the chart, pass it to wandb.log().

Example:
import random
import wandb
# Generate random data for the table
data = [
    ["car", random.uniform(0, 1)],
    ["bus", random.uniform(0, 1)],
    ["road", random.uniform(0, 1)],
    ["person", random.uniform(0, 1)],
]
# Create a table with the data
table = wandb.Table(data=data, columns=["class", "accuracy"])
# Initialize a W&B run and log the bar plot
with wandb.init(project="bar_chart") as run:
    # Create a bar plot from the table
    bar_plot = wandb.plot.bar(
         table=table,
         label="class",
         value="accuracy",
         title="Object Classification Accuracy",
    )
    # Log the bar chart to W&B
    run.log({"bar_plot": bar_plot})
confusion_matrix
confusion_matrix(
    probs: 'Sequence[Sequence[float]] | None' = None,
    y_true: 'Sequence[T] | None' = None,
    preds: 'Sequence[T] | None' = None,
    class_names: 'Sequence[str] | None' = None,
    title: 'str' = 'Confusion Matrix Curve',
    split_table: 'bool' = False
) → CustomChart
Constructs a confusion matrix from a sequence of probabilities or predictions.
Args:
- probs: A sequence of predicted probabilities for each class. The sequence shape should be (N, K) where N is the number of samples and K is the number of classes. If provided, preds should not be provided.
- y_true: A sequence of true labels.
- preds: A sequence of predicted class labels. If provided, probs should not be provided.
- class_names: Sequence of class names. If not provided, class names will be defined as "Class_1", "Class_2", etc.
- title: Title of the confusion matrix chart.
- split_table: Whether the table should be split into a separate section in the W&B UI. If True, the table will be displayed in a section named "Custom Chart Tables". Default is False.

Returns:
- CustomChart: A custom chart object that can be logged to W&B. To log the chart, pass it to wandb.log().

Raises:
- ValueError: If both probs and preds are provided, or if the number of predictions and true labels are not equal; also if the number of unique predicted classes or unique true labels exceeds the number of class names.
- wandb.Error: If numpy is not installed.

Examples: Logging a confusion matrix with random probabilities for wildlife classification:
import numpy as np
import wandb
# Define class names for wildlife
wildlife_class_names = ["Lion", "Tiger", "Elephant", "Zebra"]
# Generate random true labels (0 to 3 for 10 samples)
wildlife_y_true = np.random.randint(0, 4, size=10)
# Generate random probabilities for each class (10 samples x 4 classes)
wildlife_probs = np.random.rand(10, 4)
wildlife_probs = np.exp(wildlife_probs) / np.sum(
    np.exp(wildlife_probs),
    axis=1,
    keepdims=True,
)
# Initialize W&B run and log confusion matrix
with wandb.init(project="wildlife_classification") as run:
    confusion_matrix = wandb.plot.confusion_matrix(
         probs=wildlife_probs,
         y_true=wildlife_y_true,
         class_names=wildlife_class_names,
         title="Wildlife Classification Confusion Matrix",
    )
    run.log({"wildlife_confusion_matrix": confusion_matrix})
In this example, random probabilities are used to generate a confusion matrix.
Logging a confusion matrix with simulated model predictions and 85% accuracy:
import numpy as np
import wandb
# Define class names for wildlife
wildlife_class_names = ["Lion", "Tiger", "Elephant", "Zebra"]
# Simulate true labels for 200 animal images (imbalanced distribution)
wildlife_y_true = np.random.choice(
    [0, 1, 2, 3],
    size=200,
    p=[0.2, 0.3, 0.25, 0.25],
)
# Simulate model predictions with 85% accuracy
wildlife_preds = [
    y_t
    if np.random.rand() < 0.85
    else np.random.choice([x for x in range(4) if x != y_t])
    for y_t in wildlife_y_true
]
# Initialize W&B run and log confusion matrix
with wandb.init(project="wildlife_classification") as run:
    confusion_matrix = wandb.plot.confusion_matrix(
         preds=wildlife_preds,
         y_true=wildlife_y_true,
         class_names=wildlife_class_names,
         title="Simulated Wildlife Classification Confusion Matrix",
    )
    run.log({"wildlife_confusion_matrix": confusion_matrix})
In this example, predictions are simulated with 85% accuracy to generate a confusion matrix.
histogram
histogram(
    table: 'wandb.Table',
    value: 'str',
    title: 'str' = '',
    split_table: 'bool' = False
) → CustomChart
Constructs a histogram chart from a W&B Table.
Args:
- table: The W&B Table containing the data for the histogram.
- value: The label for the bin axis (x-axis).
- title: The title of the histogram plot.
- split_table: Whether the table should be split into a separate section in the W&B UI. If True, the table will be displayed in a section named "Custom Chart Tables". Default is False.

Returns:
- CustomChart: A custom chart object that can be logged to W&B. To log the chart, pass it to wandb.log().

Example:
import math
import random
import wandb
# Generate random data
data = [[i, random.random() + math.sin(i / 10)] for i in range(100)]
# Create a W&B Table
table = wandb.Table(
    data=data,
    columns=["step", "height"],
)
# Create a histogram plot
histogram = wandb.plot.histogram(
    table,
    value="height",
    title="My Histogram",
)
# Log the histogram plot to W&B
with wandb.init(...) as run:
    run.log({"histogram-plot1": histogram})
line_series
line_series(
    xs: 'Iterable[Iterable[Any]] | Iterable[Any]',
    ys: 'Iterable[Iterable[Any]]',
    keys: 'Iterable[str] | None' = None,
    title: 'str' = '',
    xname: 'str' = 'x',
    split_table: 'bool' = False
) → CustomChart
Constructs a line series chart.
Args:
- xs: Sequence of x values. If a singular array is provided, all y values are plotted against that x array. If an array of arrays is provided, each y value is plotted against the corresponding x array.
- ys: Sequence of y values, where each iterable represents a separate line series.
- keys: Sequence of keys for labeling each line series. If not provided, keys will be automatically generated as "line_1", "line_2", etc.
- title: Title of the chart.
- xname: Label for the x-axis.
- split_table: Whether the table should be split into a separate section in the W&B UI. If True, the table will be displayed in a section named "Custom Chart Tables". Default is False.

Returns:
- CustomChart: A custom chart object that can be logged to W&B. To log the chart, pass it to wandb.log().

Examples: Logging a single x array where all y series are plotted against the same x values:
import wandb
# Initialize W&B run
with wandb.init(project="line_series_example") as run:
    # x values shared across all y series
    xs = list(range(10))
    # Multiple y series to plot
    ys = [
         [i for i in range(10)],  # y = x
         [i**2 for i in range(10)],  # y = x^2
         [i**3 for i in range(10)],  # y = x^3
    ]
    # Generate and log the line series chart
    line_series_chart = wandb.plot.line_series(
         xs,
         ys,
         title="title",
         xname="step",
    )
    run.log({"line-series-single-x": line_series_chart})
In this example, a single xs series (shared x-values) is used for all ys series. This results in each y-series being plotted against the same x-values (0-9).
Logging multiple x arrays where each y series is plotted against its corresponding x array:
import wandb
# Initialize W&B run
with wandb.init(project="line_series_example") as run:
    # Separate x values for each y series
    xs = [
         [i for i in range(10)],  # x for first series
         [2 * i for i in range(10)],  # x for second series (stretched)
         [3 * i for i in range(10)],  # x for third series (stretched more)
    ]
    # Corresponding y series
    ys = [
         [i for i in range(10)],  # y = x
         [i**2 for i in range(10)],  # y = x^2
         [i**3 for i in range(10)],  # y = x^3
    ]
    # Generate and log the line series chart
    line_series_chart = wandb.plot.line_series(
         xs, ys, title="Multiple X Arrays Example", xname="Step"
    )
    run.log({"line-series-multiple-x": line_series_chart})
In this example, each y series is plotted against its own unique x series. This allows for more flexibility when the x values are not uniform across the data series.
Customizing line labels using keys:
import wandb
# Initialize W&B run
with wandb.init(project="line_series_example") as run:
    xs = list(range(10))  # Single x array
    ys = [
         [i for i in range(10)],  # y = x
         [i**2 for i in range(10)],  # y = x^2
         [i**3 for i in range(10)],  # y = x^3
    ]
    # Custom labels for each line
    keys = ["Linear", "Quadratic", "Cubic"]
    # Generate and log the line series chart
    line_series_chart = wandb.plot.line_series(
         xs,
         ys,
         keys=keys,  # Custom keys (line labels)
         title="Custom Line Labels Example",
         xname="Step",
    )
    run.log({"line-series-custom-keys": line_series_chart})
This example shows how to provide custom labels for the lines using the keys argument. The keys will appear in the legend as “Linear”, “Quadratic”, and “Cubic”.
line
line(
    table: 'wandb.Table',
    x: 'str',
    y: 'str',
    stroke: 'str | None' = None,
    title: 'str' = '',
    split_table: 'bool' = False
) → CustomChart
Constructs a customizable line chart.
Args:
- table: The table containing data for the chart.
- x: Column name for the x-axis values.
- y: Column name for the y-axis values.
- stroke: Column name to differentiate line strokes (e.g., for grouping lines).
- title: Title of the chart.
- split_table: Whether the table should be split into a separate section in the W&B UI. If True, the table will be displayed in a section named "Custom Chart Tables". Default is False.

Returns:
- CustomChart: A custom chart object that can be logged to W&B. To log the chart, pass it to wandb.log().

Example:
import math
import random
import wandb
# Create multiple series of data with different patterns
data = []
for i in range(100):
     # Series 1: Sinusoidal pattern with random noise
     data.append([i, math.sin(i / 10) + random.uniform(-0.1, 0.1), "series_1"])
     # Series 2: Cosine pattern with random noise
     data.append([i, math.cos(i / 10) + random.uniform(-0.1, 0.1), "series_2"])
     # Series 3: Linear increase with random noise
     data.append([i, i / 10 + random.uniform(-0.5, 0.5), "series_3"])
# Define the columns for the table
table = wandb.Table(data=data, columns=["step", "value", "series"])
# Initialize wandb run and log the line chart
with wandb.init(project="line_chart_example") as run:
     line_chart = wandb.plot.line(
         table=table,
         x="step",
         y="value",
         stroke="series",  # Group by the "series" column
         title="Multi-Series Line Plot",
     )
     run.log({"line-chart": line_chart})
plot_table
plot_table(
    vega_spec_name: 'str',
    data_table: 'wandb.Table',
    fields: 'dict[str, Any]',
    string_fields: 'dict[str, Any] | None' = None,
    split_table: 'bool' = False
) → CustomChart
Creates a custom chart using a Vega-Lite specification and a wandb.Table.
This function creates a custom chart based on a Vega-Lite specification and a data table represented by a wandb.Table object. The specification needs to be predefined and stored in the W&B backend. The function returns a custom chart object that can be logged to W&B using wandb.Run.log().
Args:
- vega_spec_name: The name or identifier of the Vega-Lite spec that defines the visualization structure.
- data_table: A wandb.Table object containing the data to be visualized.
- fields: A mapping between the fields in the Vega-Lite spec and the corresponding columns in the data table to be visualized.
- string_fields: A dictionary for providing values for any string constants required by the custom visualization.
- split_table: Whether the table should be split into a separate section in the W&B UI. If True, the table will be displayed in a section named "Custom Chart Tables". Default is False.

Returns:
- CustomChart: A custom chart object that can be logged to W&B. To log the chart, pass the chart object as an argument to wandb.Run.log().

Raises:
- wandb.Error: If data_table is not a wandb.Table object.

Example:
# Create a custom chart using a Vega-Lite spec and the data table.
import wandb
data = [[1, 1], [2, 2], [3, 3], [4, 4], [5, 5]]
table = wandb.Table(data=data, columns=["x", "y"])
fields = {"x": "x", "y": "y", "title": "MY TITLE"}
with wandb.init() as run:
   # Training code goes here
   # Create a custom title with `string_fields`.
   my_custom_chart = wandb.plot_table(
        vega_spec_name="wandb/line/v0",
        data_table=table,
        fields=fields,
        string_fields={"title": "Title"},
   )
   run.log({"custom_chart": my_custom_chart})
pr_curve
pr_curve(
    y_true: 'Iterable[T] | None' = None,
    y_probas: 'Iterable[numbers.Number] | None' = None,
    labels: 'list[str] | None' = None,
    classes_to_plot: 'list[T] | None' = None,
    interp_size: 'int' = 21,
    title: 'str' = 'Precision-Recall Curve',
    split_table: 'bool' = False
) → CustomChart
Constructs a Precision-Recall (PR) curve.
The Precision-Recall curve is particularly useful for evaluating classifiers on imbalanced datasets. A high area under the PR curve signifies both high precision (a low false positive rate) and high recall (a low false negative rate). The curve provides insights into the balance between false positives and false negatives at various threshold levels, aiding in the assessment of a model’s performance.
Args:
- y_true: True binary labels. The shape should be (num_samples,).
- y_probas: Predicted scores or probabilities for each class. These can be probability estimates, confidence scores, or non-thresholded decision values. The shape should be (num_samples, num_classes).
- labels: Optional list of class names to replace numeric values in y_true for easier plot interpretation. For example, labels = ['dog', 'cat', 'owl'] will replace 0 with 'dog', 1 with 'cat', and 2 with 'owl' in the plot. If not provided, numeric values from y_true will be used.
- classes_to_plot: Optional list of unique class values from y_true to be included in the plot. If not specified, all unique classes in y_true will be plotted.
- interp_size: Number of points to interpolate recall values. The recall values will be fixed to interp_size uniformly distributed points in the range [0, 1], and the precision will be interpolated accordingly.
- title: Title of the plot. Defaults to "Precision-Recall Curve".
- split_table: Whether the table should be split into a separate section in the W&B UI. If True, the table will be displayed in a section named "Custom Chart Tables". Default is False.

Returns:
- CustomChart: A custom chart object that can be logged to W&B. To log the chart, pass it to wandb.log().

Raises:
- wandb.Error: If NumPy, pandas, or scikit-learn is not installed.

Example:
import wandb
# Example for spam detection (binary classification)
y_true = [0, 1, 1, 0, 1]  # 0 = not spam, 1 = spam
y_probas = [
    [0.9, 0.1],  # Predicted probabilities for the first sample (not spam)
    [0.2, 0.8],  # Second sample (spam), and so on
    [0.1, 0.9],
    [0.8, 0.2],
    [0.3, 0.7],
]
labels = ["not spam", "spam"]  # Optional class names for readability
with wandb.init(project="spam-detection") as run:
    pr_curve = wandb.plot.pr_curve(
         y_true=y_true,
         y_probas=y_probas,
         labels=labels,
         title="Precision-Recall Curve for Spam Detection",
    )
    run.log({"pr-curve": pr_curve})
roc_curve
roc_curve(
    y_true: 'Sequence[numbers.Number]',
    y_probas: 'Sequence[Sequence[float]] | None' = None,
    labels: 'list[str] | None' = None,
    classes_to_plot: 'list[numbers.Number] | None' = None,
    title: 'str' = 'ROC Curve',
    split_table: 'bool' = False
) → CustomChart
Constructs Receiver Operating Characteristic (ROC) curve chart.
Args:
- y_true: The true class labels (ground truth) for the target variable. Shape should be (num_samples,).
- y_probas: The predicted probabilities or decision scores for each class. Shape should be (num_samples, num_classes).
- labels: Human-readable labels corresponding to the class indices in y_true. For example, if labels=['dog', 'cat'], class 0 will be displayed as 'dog' and class 1 as 'cat' in the plot. If None, the raw class indices from y_true will be used. Default is None.
- classes_to_plot: A subset of unique class labels to include in the ROC curve. If None, all classes in y_true will be plotted. Default is None.
- title: Title of the ROC curve plot. Default is "ROC Curve".
- split_table: Whether the table should be split into a separate section in the W&B UI. If True, the table will be displayed in a section named "Custom Chart Tables". Default is False.

Returns:
- CustomChart: A custom chart object that can be logged to W&B. To log the chart, pass it to wandb.log().

Raises:
- wandb.Error: If numpy, pandas, or scikit-learn are not found.

Example:
import numpy as np
import wandb
# Simulate a medical diagnosis classification problem with three diseases
n_samples = 200
n_classes = 3
# True labels: assign "Diabetes", "Hypertension", or "Heart Disease" to
# each sample
disease_labels = ["Diabetes", "Hypertension", "Heart Disease"]
# 0: Diabetes, 1: Hypertension, 2: Heart Disease
y_true = np.random.choice([0, 1, 2], size=n_samples)
# Predicted probabilities: simulate predictions, ensuring they sum to 1
# for each sample
y_probas = np.random.dirichlet(np.ones(n_classes), size=n_samples)
# Specify classes to plot (plotting all three diseases)
classes_to_plot = [0, 1, 2]
# Initialize a W&B run and log a ROC curve plot for disease classification
with wandb.init(project="medical_diagnosis") as run:
   roc_plot = wandb.plot.roc_curve(
        y_true=y_true,
        y_probas=y_probas,
        labels=disease_labels,
        classes_to_plot=classes_to_plot,
        title="ROC Curve for Disease Classification",
   )
   run.log({"roc-curve": roc_plot})
scatterscatter(
    table: 'wandb.Table',
    x: 'str',
    y: 'str',
    title: 'str' = '',
    split_table: 'bool' = False
) → CustomChart
Constructs a scatter plot from a wandb.Table of data.
Args:
table: The W&B Table containing the data to visualize.
x: The name of the column used for the x-axis.
y: The name of the column used for the y-axis.
title: The title of the scatter chart.
split_table: Whether the table should be split into a separate section in the W&B UI. If True, the table will be displayed in a section named "Custom Chart Tables". Default is False.
Returns:
CustomChart: A custom chart object that can be logged to W&B. To log the chart, pass it to wandb.log().
Example:
import math
import random
import wandb
# Simulate temperature variations at different altitudes over time
data = [
   [i, random.uniform(-10, 20) - 0.005 * i + 5 * math.sin(i / 50)]
   for i in range(300)
]
# Create W&B table with altitude (m) and temperature (°C) columns
table = wandb.Table(data=data, columns=["altitude (m)", "temperature (°C)"])
# Initialize W&B run and log the scatter plot
with wandb.init(project="temperature-altitude-scatter") as run:
   # Create and log the scatter plot
   scatter_plot = wandb.plot.scatter(
        table=table,
        x="altitude (m)",
        y="temperature (°C)",
        title="Altitude vs Temperature",
   )
   run.log({"altitude-temperature-scatter": scatter_plot})
Query and analyze data logged to W&B.
wandb.apis.publicUse the Public API to export or update data that you have saved to W&B.
Before using this API, you’ll want to log data from your script — check the Quickstart for more details.
You might use the Public API to
ready-to-deploy.For more on using the Public API, check out our guide.
ApiUsed for querying the W&B server.
Examples:
import wandb
wandb.Api()
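A hedged sketch of optional configuration when constructing the client; the base_url and entity shown are placeholders for your own deployment and team, not values from this reference:
import wandb
# Point the client at a self-hosted W&B server (placeholder URL) and raise the
# HTTP timeout; entity/project defaults can also be set via overrides.
api = wandb.Api(
    overrides={"base_url": "https://wandb.example.com", "entity": "my-team"},
    timeout=60,
)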
Api.__init____init__(
    overrides: Optional[Dict[str, Any]] = None,
    timeout: Optional[int] = None,
    api_key: Optional[str] = None
) → None
Initialize the API.
Args:
overrides: You can set base_url if you are using a W&B server other than https://api.wandb.ai. You can also set defaults for entity, project, and run.
timeout: HTTP timeout in seconds for API requests. If not specified, the default timeout will be used.
api_key: API key to use for authentication. If not provided, the API key from the current environment or configuration will be used.
Returns W&B API key.
Returns the client object.
Returns the default W&B entity.
Returns W&B public user agent.
Returns the viewer object.
Raises:
ValueError: If viewer data is not able to be fetched from W&B.
requests.RequestException: If an error occurs while making the graphql request.
Api.artifact
artifact(name: str, type: Optional[str] = None)
Returns a single artifact.
Args:
name: The artifact's name. The name of an artifact resembles a filepath that consists of, at a minimum, the name of the project the artifact was logged to, the name of the artifact, and the artifact's version or alias. Optionally append the entity that logged the artifact as a prefix followed by a forward slash. If no entity is specified in the name, the Run or API setting's entity is used.
type: The type of artifact to fetch.
Returns:
An Artifact object.
Raises:
ValueError: If the artifact name is not specified.
ValueError: If the artifact type is specified but does not match the type of the fetched artifact.
Examples: In the following code snippets, "entity", "project", "artifact", "version", and "alias" are placeholders for your W&B entity, the name of the project the artifact is in, the name of the artifact, and the artifact's version or alias, respectively.
import wandb
# Specify the project, artifact's name, and the artifact's alias
wandb.Api().artifact(name="project/artifact:alias")
# Specify the project, artifact's name, and a specific artifact version
wandb.Api().artifact(name="project/artifact:version")
# Specify the entity, project, artifact's name, and the artifact's alias
wandb.Api().artifact(name="entity/project/artifact:alias")
# Specify the entity, project, artifact's name, and a specific artifact version
wandb.Api().artifact(name="entity/project/artifact:version")
Note:
This method is intended for external use only. Do not call
api.artifact()within the wandb repository code.
Api.artifact_collectionartifact_collection(type_name: str, name: str) → public.ArtifactCollection
Returns a single artifact collection by type.
You can use the returned ArtifactCollection object to retrieve information about specific artifacts in that collection, and more.
Args:
type_name: The type of artifact collection to fetch.
name: An artifact collection name. Optionally append the entity that logged the artifact as a prefix followed by a forward slash.
Returns:
An ArtifactCollection object.
Examples: In the following code snippet, "type", "entity", "project", and "artifact_name" are placeholders for the collection type, your W&B entity, the name of the project the artifact is in, and the name of the artifact, respectively.
import wandb
collections = wandb.Api().artifact_collection(
    type_name="type", name="entity/project/artifact_name"
)
# Get the first artifact in the collection
artifact_example = collections.artifacts()[0]
# Download the contents of the artifact to the specified root directory.
artifact_example.download()
Api.artifact_collection_existsartifact_collection_exists(name: str, type: str) → bool
Whether an artifact collection exists within a specified project and entity.
Args:
name:  An artifact collection name. Optionally append the  entity that logged the artifact as a prefix followed by  a forward slash. If entity or project is not specified,  infer the collection from the override params if they exist.  Otherwise, entity is pulled from the user settings and project  will default to “uncategorized”.type:  The type of artifact collection.Returns: True if the artifact collection exists, False otherwise.
Examples: In the following code snippet, "type" and "collection_name" refer to the type of the artifact collection and the name of the collection, respectively.
import wandb
wandb.Api().artifact_collection_exists(type="type", name="collection_name")
Api.artifact_collectionsartifact_collections(
    project_name: str,
    type_name: str,
    per_page: int = 50
) → public.ArtifactCollections
Returns a collection of matching artifact collections.
Args:
project_name:  The name of the project to filter on.type_name:  The name of the artifact type to filter on.per_page:  Sets the page size for query pagination.  None will use the default size.  Usually there is no reason to change this.Returns:
An iterable ArtifactCollections object.
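For example, a minimal sketch of iterating over the collections of one artifact type; "entity/project" and "dataset" are placeholders for your own project path and artifact type:
import wandb
api = wandb.Api()
# Fetch all "dataset" collections in the project and print their names
collections = api.artifact_collections(project_name="entity/project", type_name="dataset")
for collection in collections:
    print(collection.name)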
Api.artifact_existsartifact_exists(name: str, type: Optional[str] = None) → bool
Whether an artifact version exists within the specified project and entity.
Args:
name:  The name of artifact. Add the artifact’s entity and project  as a prefix. Append the version or the alias of the artifact  with a colon. If the entity or project is not specified,  W&B uses override parameters if populated. Otherwise, the  entity is pulled from the user settings and the project is  set to “Uncategorized”.type:  The type of artifact.Returns: True if the artifact version exists, False otherwise.
Examples: In the following code snippets, "entity", "project", "artifact", "version", and "alias" are placeholders for your W&B entity, the name of the project the artifact is in, the name of the artifact, and the artifact's version or alias, respectively.
import wandb
wandb.Api().artifact_exists("entity/project/artifact:version")
wandb.Api().artifact_exists("entity/project/artifact:alias")
Api.artifact_typeartifact_type(
    type_name: str,
    project: Optional[str] = None
) → public.ArtifactType
Returns the matching ArtifactType.
Args:
type_name:  The name of the artifact type to retrieve.project:  If given, a project name or path to filter on.Returns:
An ArtifactType object.
Api.artifact_typesartifact_types(project: Optional[str] = None) → public.ArtifactTypes
Returns a collection of matching artifact types.
Args:
project:  The project name or path to filter on.Returns:
An iterable ArtifactTypes object.
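For example, a minimal sketch listing the artifact types logged to a project; "entity/project" is a placeholder for your own project path:
import wandb
api = wandb.Api()
# Iterate over all artifact types in the project and print their names
for artifact_type in api.artifact_types(project="entity/project"):
    print(artifact_type.name)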
Api.artifact_versionsartifact_versions(type_name, name, per_page=50)
Deprecated. Use Api.artifacts(type_name, name) method instead.
Api.artifactsartifacts(
    type_name: str,
    name: str,
    per_page: int = 50,
    tags: Optional[List[str]] = None
) → public.Artifacts
Return an Artifacts collection.
Args:
type_name: The type of artifacts to fetch.
name: The artifact's collection name. Optionally append the entity that logged the artifact as a prefix followed by a forward slash.
per_page: Sets the page size for query pagination. If set to None, use the default size. Usually there is no reason to change this.
tags: Only return artifacts with all of these tags.
Returns:
An iterable Artifacts object.
Examples: In the following code snippet, "type", "entity", "project", and "artifact_name" are placeholders for the artifact type, your W&B entity, the name of the project the artifact was logged to, and the name of the artifact, respectively.
import wandb
wandb.Api().artifacts(type_name="type", name="entity/project/artifact_name")
Api.automationautomation(name: str, entity: Optional[str] = None) → Automation
Returns the only Automation matching the parameters.
Args:
name:  The name of the automation to fetch.entity:  The entity to fetch the automation for.Raises:
ValueError:  If zero or multiple Automations match the search criteria.Examples: Get an existing automation named “my-automation”:
import wandb
api = wandb.Api()
automation = api.automation(name="my-automation")
Get an existing automation named “other-automation”, from the entity “my-team”:
automation = api.automation(name="other-automation", entity="my-team")
Api.automationsautomations(
    entity: Optional[str] = None,
    name: Optional[str] = None,
    per_page: int = 50
) → Iterator[ForwardRef('Automation')]
Returns an iterator over all Automations that match the given parameters.
If no parameters are provided, the returned iterator will contain all Automations that the user has access to.
Args:
entity:  The entity to fetch the automations for.name:  The name of the automation to fetch.per_page:  The number of automations to fetch per page.  Defaults to 50.  Usually there is no reason to change this.Returns: A list of automations.
Examples: Fetch all existing automations for the entity “my-team”:
import wandb
api = wandb.Api()
automations = api.automations(entity="my-team")
Api.create_automationcreate_automation(
    obj: 'NewAutomation',
    fetch_existing: bool = False,
    **kwargs: typing_extensions.Unpack[ForwardRef('WriteAutomationsKwargs')]
) → Automation
Create a new Automation.
Args:
obj:  The automation to create.  fetch_existing:  If True, and a conflicting automation already exists, attempt  to fetch the existing automation instead of raising an error.  **kwargs:  Any additional values to assign to the automation before  creating it.  If given, these will override any values that may  already be set on the automation:
- name: The name of the automation.
- description: The description of the automation.
- enabled: Whether the automation is enabled.
- scope: The scope of the automation.
- event: The event that triggers the automation.
- action: The action that is triggered by the automation.
Returns: The saved Automation.
Examples: Create a new automation named “my-automation” that sends a Slack notification when a run within a specific project logs a metric exceeding a custom threshold:
import wandb
from wandb.automations import OnRunMetric, RunEvent, SendNotification
api = wandb.Api()
project = api.project("my-project", entity="my-team")
# Use the first Slack integration for the team
slack_hook = next(api.slack_integrations(entity="my-team"))
event = OnRunMetric(
     scope=project,
     filter=RunEvent.metric("custom-metric") > 10,
)
action = SendNotification.from_integration(slack_hook)
automation = api.create_automation(
     event >> action,
     name="my-automation",
     description="Send a Slack message whenever 'custom-metric' exceeds 10.",
)
Api.create_custom_chartcreate_custom_chart(
    entity: str,
    name: str,
    display_name: str,
    spec_type: Literal['vega2'],
    access: Literal['private', 'public'],
    spec: Union[str, dict]
) → str
Create a custom chart preset and return its id.
Args:
entity: The entity (user or team) that owns the chart.
name: Unique identifier for the chart preset.
display_name: Human-readable name shown in the UI.
spec_type: Type of specification. Must be "vega2" for Vega-Lite v2 specifications.
access: Access level for the chart:
- "private": Chart is only accessible to the entity that created it.
- "public": Chart is publicly accessible.
spec: The Vega/Vega-Lite specification as a dictionary or JSON string.
Returns: The ID of the created chart preset in the format "entity/name".
Raises:
wandb.Error: If chart creation fails.
UnsupportedError: If the server doesn't support custom charts.
Example:
   import wandb
   api = wandb.Api()
   # Define a simple bar chart specification
   vega_spec = {
        "$schema": "https://vega.github.io/schema/vega-lite/v6.json",
        "mark": "bar",
        "data": {"name": "wandb"},
        "encoding": {
            "x": {"field": "${field:x}", "type": "ordinal"},
            "y": {"field": "${field:y}", "type": "quantitative"},
        },
   }
   # Create the custom chart
   chart_id = api.create_custom_chart(
        entity="my-team",
        name="my-bar-chart",
        display_name="My Custom Bar Chart",
        spec_type="vega2",
        access="private",
        spec=vega_spec,
   )
   # Use with wandb.plot_table()
   chart = wandb.plot_table(
        vega_spec_name=chart_id,
        data_table=my_table,
        fields={"x": "category", "y": "value"},
   )
Api.create_project
create_project(name: str, entity: str) → None
Create a new project.
Args:
name: The name of the new project.
entity: The entity of the new project.
Api.create_registry
create_registry(
    name: str,
    visibility: Literal['organization', 'restricted'],
    organization: Optional[str] = None,
    description: Optional[str] = None,
    artifact_types: Optional[List[str]] = None
) → Registry
Create a new registry.
Args:
name: The name of the registry. Name must be unique within the organization.
visibility: The visibility of the registry.
- organization: Anyone in the organization can view this registry. You can edit their roles later from the settings in the UI.
- restricted: Only members invited via the UI can access this registry. Public sharing is disabled.
organization: The organization of the registry. If no organization is set in the settings, the organization will be fetched from the entity if the entity only belongs to one organization.
description: The description of the registry.
artifact_types: The accepted artifact types of the registry. A type is no more than 128 characters and cannot include the characters `/` or `:`. If not specified, all types are accepted. Allowed types added to the registry cannot be removed later.
Returns: A registry object.
Examples:
import wandb
api = wandb.Api()
registry = api.create_registry(
   name="my-registry",
   visibility="restricted",
   organization="my-org",
   description="This is a test registry",
   artifact_types=["model"],
)
Api.create_runcreate_run(
    run_id: Optional[str] = None,
    project: Optional[str] = None,
    entity: Optional[str] = None
) → public.Run
Create a new run.
Args:
run_id:  The ID to assign to the run. If not specified, W&B  creates a random ID.project:  The project where to log the run to. If no project is specified,  log the run to a project called “Uncategorized”.entity:  The entity that owns the project. If no entity is  specified, log the run to the default entity.Returns:
The newly created Run.
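For example, a minimal sketch of creating a run from the Public API; "my-team" and "my-project" are placeholders for your own entity and project:
import wandb
api = wandb.Api()
# Create a run with an auto-generated ID in the given project
run = api.create_run(project="my-project", entity="my-team")
print(run.id)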
Api.create_run_queuecreate_run_queue(
    name: str,
    type: 'public.RunQueueResourceType',
    entity: Optional[str] = None,
    prioritization_mode: Optional[ForwardRef('public.RunQueuePrioritizationMode')] = None,
    config: Optional[dict] = None,
    template_variables: Optional[dict] = None
) → public.RunQueue
Create a new run queue in W&B Launch.
Args:
name:  Name of the queue to createtype:  Type of resource to be used for the queue. One of  “local-container”, “local-process”, “kubernetes”,“sagemaker”,  or “gcp-vertex”.entity:  Name of the entity to create the queue. If None, use  the configured or default entity.prioritization_mode:  Version of prioritization to use.  Either “V0” or None.config:  Default resource configuration to be used for the queue.  Use handlebars (eg. {{var}}) to specify template variables.template_variables:  A dictionary of template variable schemas to  use with the config.Returns:
The newly created RunQueue.
Raises:
ValueError if any of the parameters are invalid wandb.Error on wandb API errors
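A minimal sketch of creating a Launch queue, assuming the placeholder names "my-launch-queue" and "my-team"; "local-container" is one of the documented resource types:
import wandb
api = wandb.Api()
# Create a queue backed by the local-container resource
queue = api.create_run_queue(
    name="my-launch-queue",
    type="local-container",
    entity="my-team",
)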
Api.create_teamcreate_team(team: str, admin_username: Optional[str] = None) → public.Team
Create a new team.
Args:
team:  The name of the teamadmin_username:  Username of the admin user of the team.  Defaults to the current user.Returns:
A Team object.
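For example, a minimal sketch using a placeholder team name; the admin defaults to the current user:
import wandb
api = wandb.Api()
# Create a team named "ml-team" with the current user as admin
team = api.create_team(team="ml-team")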
Api.create_usercreate_user(email: str, admin: Optional[bool] = False)
Create a new user.
Args:
email:  The email address of the user.admin:  Set user as a global instance administrator.Returns:
A User object.
Api.delete_automationdelete_automation(obj: Union[ForwardRef('Automation'), str]) → Literal[True]
Delete an automation.
Args:
obj:  The automation to delete, or its ID.Returns: True if the automation was deleted successfully.
Api.flushflush()
Flush the local cache.
The api object keeps a local cache of runs, so if the state of the run may change while executing your script you must clear the local cache with api.flush() to get the latest values associated with the run.
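For example, a minimal sketch of polling a run that may still be changing; "entity/project/run_id" is a placeholder run path:
import time
import wandb
api = wandb.Api()
run = api.run("entity/project/run_id")
while run.state == "running":
    time.sleep(30)
    api.flush()  # clear the local cache so the next read reflects the latest state
    run = api.run("entity/project/run_id")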
Api.from_pathfrom_path(path: str)
Return a run, sweep, project or report from a path.
Args:
path:  The path to the project, run, sweep or reportReturns:
A Project, Run, Sweep, or BetaReport instance.
Raises:
wandb.Error if path is invalid or the object doesn’t exist.
Examples: In the following code snippets, "project", "team", "run_id", "sweep_id", and "report_name" are placeholders for the project, team, run ID, sweep ID, and the name of a specific report, respectively.
import wandb
api = wandb.Api()
project = api.from_path("project")
team_project = api.from_path("team/project")
run = api.from_path("team/project/runs/run_id")
sweep = api.from_path("team/project/sweeps/sweep_id")
report = api.from_path("team/project/reports/report_name")
Api.integrationsintegrations(
    entity: Optional[str] = None,
    per_page: int = 50
) → Iterator[ForwardRef('Integration')]
Return an iterator of all integrations for an entity.
Args:
entity:  The entity (e.g. team name) for which to  fetch integrations.  If not provided, the user’s default entity  will be used.per_page:  Number of integrations to fetch per page.  Defaults to 50.  Usually there is no reason to change this.Yields:
Iterator[SlackIntegration | WebhookIntegration]:  An iterator of any supported integrations.Api.jobjob(name: Optional[str], path: Optional[str] = None) → public.Job
Return a Job object.
Args:
name:  The name of the job.path:  The root path to download the job artifact.Returns:
A Job object.
Api.list_jobslist_jobs(entity: str, project: str) → List[Dict[str, Any]]
Return a list of jobs, if any, for the given entity and project.
Args:
entity:  The entity for the listed jobs.project:  The project for the listed jobs.Returns: A list of matching jobs.
Api.projectproject(name: str, entity: Optional[str] = None) → public.Project
Return the Project with the given name (and entity, if given).
Args:
name:  The project name.entity:  Name of the entity requested.  If None, will fall back to the  default entity passed to Api.  If no default entity, will  raise a ValueError.Returns:
A Project object.
Api.projectsprojects(entity: Optional[str] = None, per_page: int = 200) → public.Projects
Get projects for a given entity.
Args:
entity:  Name of the entity requested.  If None, will fall back to  the default entity passed to Api.  If no default entity,  will raise a ValueError.per_page:  Sets the page size for query pagination. If set to None,  use the default size. Usually there is no reason to change this.Returns:
A Projects object, which is an iterable collection of Project objects.
Api.queued_runqueued_run(
    entity: str,
    project: str,
    queue_name: str,
    run_queue_item_id: str,
    project_queue=None,
    priority=None
)
Return a single queued run based on the path.
Parses paths of the form entity/project/queue_id/run_queue_item_id.
Api.registriesregistries(
    organization: Optional[str] = None,
    filter: Optional[Dict[str, Any]] = None
) → Registries
Returns a lazy iterator of Registry objects.
Use the iterator to search and filter registries, collections, or artifact versions across your organization’s registry.
Args:
organization: (str, optional) The organization of the registry to fetch. If not specified, use the organization specified in the user's settings.
filter: (dict, optional) MongoDB-style filter to apply to each object in the lazy registry iterator. Fields available to filter for registries are name, description, created_at, updated_at. Fields available to filter for collections are name, tag, description, created_at, updated_at. Fields available to filter for versions are tag, alias, created_at, updated_at, metadata.
Returns:
A lazy iterator of Registry objects.
Examples: Find all registries with the names that contain “model”
import wandb
api = wandb.Api()  # specify an org if your entity belongs to multiple orgs
api.registries(filter={"name": {"$regex": "model"}})
Find all collections in the registries with the name “my_collection” and the tag “my_tag”
api.registries().collections(filter={"name": "my_collection", "tag": "my_tag"})
Find all artifact versions in the registries with a collection name that contains “my_collection” and a version that has the alias “best”
api.registries().collections(
    filter={"name": {"$regex": "my_collection"}}
).versions(filter={"alias": "best"})
Find all artifact versions in the registries that contain “model” and have the tag “prod” or alias “best”
api.registries(filter={"name": {"$regex": "model"}}).versions(
    filter={"$or": [{"tag": "prod"}, {"alias": "best"}]}
)
Api.registryregistry(name: str, organization: Optional[str] = None) → Registry
Return a registry given a registry name.
Args:
name:  The name of the registry. This is without the wandb-registry-  prefix.organization:  The organization of the registry.  If no organization is set in the settings, the organization will be  fetched from the entity if the entity only belongs to one  organization.Returns: A registry object.
Examples: Fetch and update a registry
import wandb
api = wandb.Api()
registry = api.registry(name="my-registry", organization="my-org")
registry.description = "This is an updated description"
registry.save()
Api.reportsreports(
    path: str = '',
    name: Optional[str] = None,
    per_page: int = 50
) → public.Reports
Get reports for a given project path.
Note: wandb.Api.reports() API is in beta and will likely change in future releases.
Args:
path:  The path to the project the report resides in. Specify the  entity that created the project as a prefix followed by a  forward slash.name:  Name of the report requested.per_page:  Sets the page size for query pagination. If set to  None, use the default size. Usually there is no reason to  change this.Returns:
A Reports object which is an iterable collection of  BetaReport objects.
Examples:
import wandb
wandb.Api().reports("entity/project")
Api.runrun(path='')
Return a single run by parsing path in the form entity/project/run_id.
Args:
path:  Path to run in the form entity/project/run_id.  If api.entity is set, this can be in the form project/run_id  and if api.project is set this can just be the run_id.Returns:
A Run object.
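For example, a minimal sketch of fetching a single run and reading its attributes; "entity/project/run_id" is a placeholder run path:
import wandb
api = wandb.Api()
run = api.run("entity/project/run_id")
print(run.name, run.state)
print(run.summary)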
Api.run_queuerun_queue(entity: str, name: str)
Return the named RunQueue for entity.
See Api.create_run_queue for more information on how to create a run queue.
Api.runsruns(
    path: Optional[str] = None,
    filters: Optional[Dict[str, Any]] = None,
    order: str = '+created_at',
    per_page: int = 50,
    include_sweeps: bool = True
)
Returns a Runs object, which lazily iterates over Run objects.
Fields you can filter by include:
createdAt: The timestamp when the run was created (in ISO 8601 format, e.g. "2023-01-01T12:00:00Z").
displayName: The human-readable display name of the run (e.g. "eager-fox-1").
duration: The total runtime of the run in seconds.
group: The group name used to organize related runs together.
host: The hostname where the run was executed.
jobType: The type of job or purpose of the run.
name: The unique identifier of the run (e.g. "a1b2cdef").
state: The current state of the run.
tags: The tags associated with the run.
username: The username of the user who initiated the run.
Additionally, you can filter by items in the run config or summary metrics, such as config.experiment_name, summary_metrics.loss, and so on.
For more complex filtering, you can use MongoDB query operators. For details, see: https://docs.mongodb.com/manual/reference/operator/query The following operations are supported:
$and, $or, $nor, $eq, $ne, $gt, $gte, $lt, $lte, $in, $nin, $exists, $regex
Args:
path: (str) Path to the project, in the form "entity/project".
filters: (dict) Queries for specific runs using the MongoDB query language. You can filter by run properties such as config.key, summary_metrics.key, state, entity, createdAt, etc. For example, {"config.experiment_name": "foo"} would find runs with a config entry of experiment_name set to "foo".
order: (str) Order can be created_at, heartbeat_at, config.*.value, or summary_metrics.*. If you prepend order with a + the order is ascending (default). If you prepend order with a - the order is descending. The default order is run.created_at from oldest to newest.
per_page: (int) Sets the page size for query pagination.
include_sweeps: (bool) Whether to include the sweep runs in the results.
Returns:
A Runs object, which is an iterable collection of Run objects.
Examples:
# Find runs in project where config.experiment_name has been set to "foo"
api.runs(path="my_entity/project", filters={"config.experiment_name": "foo"})
# Find runs in project where config.experiment_name has been set to "foo" or "bar"
api.runs(
    path="my_entity/project",
    filters={
         "$or": [
             {"config.experiment_name": "foo"},
             {"config.experiment_name": "bar"},
         ]
    },
)
# Find runs in project where config.experiment_name matches a regex
# (anchors are not supported)
api.runs(
    path="my_entity/project",
    filters={"config.experiment_name": {"$regex": "b.*"}},
)
# Find runs in project where the run name matches a regex
# (anchors are not supported)
api.runs(
    path="my_entity/project", filters={"display_name": {"$regex": "^foo.*"}}
)
# Find runs in project sorted by ascending loss
api.runs(path="my_entity/project", order="+summary_metrics.loss")
Api.slack_integrationsslack_integrations(
    entity: Optional[str] = None,
    per_page: int = 50
) → Iterator[ForwardRef('SlackIntegration')]
Returns an iterator of Slack integrations for an entity.
Args:
entity:  The entity (e.g. team name) for which to  fetch integrations.  If not provided, the user’s default entity  will be used.per_page:  Number of integrations to fetch per page.  Defaults to 50.  Usually there is no reason to change this.Yields:
Iterator[SlackIntegration]:  An iterator of Slack integrations.Examples: Get all registered Slack integrations for the team “my-team”:
import wandb
api = wandb.Api()
slack_integrations = api.slack_integrations(entity="my-team")
Find only Slack integrations that post to channel names starting with “team-alerts-”:
slack_integrations = api.slack_integrations(entity="my-team")
team_alert_integrations = [
    ig
    for ig in slack_integrations
    if ig.channel_name.startswith("team-alerts-")
]
Api.sweepsweep(path='')
Return a sweep by parsing path in the form entity/project/sweep_id.
Args:
path:  Path to sweep in the form entity/project/sweep_id.  If api.entity is set, this can be in the form  project/sweep_id and if api.project is set  this can just be the sweep_id.Returns:
A Sweep object.
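For example, a minimal sketch of fetching a sweep and reading its state; "entity/project/sweep_id" is a placeholder sweep path:
import wandb
api = wandb.Api()
sweep = api.sweep("entity/project/sweep_id")
print(sweep.state)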
Api.sync_tensorboardsync_tensorboard(root_dir, run_id=None, project=None, entity=None)
Sync a local directory containing tfevent files to wandb.
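A hedged sketch based on the documented signature; "./tb_logs", "my-project", and "my-team" are placeholders for your own tfevent directory, project, and entity:
import wandb
api = wandb.Api()
# Sync a local directory of tfevent files into a W&B run
api.sync_tensorboard(root_dir="./tb_logs", project="my-project", entity="my-team")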
Api.teamteam(team: str) → public.Team
Return the matching Team with the given name.
Args:
team:  The name of the team.Returns:
A Team object.
Api.update_automationupdate_automation(
    obj: 'Automation',
    create_missing: bool = False,
    **kwargs: typing_extensions.Unpack[ForwardRef('WriteAutomationsKwargs')]
) → Automation
Update an existing automation.
Args:
obj:  The automation to update.  Must be an existing automation. create_missing (bool):  If True, and the automation does not exist, create it. **kwargs:  Any additional values to assign to the automation before  updating it.  If given, these will override any values that may  already be set on the automation:
- name: The name of the automation.
- description: The description of the automation.
- enabled: Whether the automation is enabled.
- scope: The scope of the automation.
- event: The event that triggers the automation.
- action: The action that is triggered by the automation.Returns: The updated automation.
Examples: Disable and edit the description of an existing automation (“my-automation”):
import wandb
api = wandb.Api()
automation = api.automation(name="my-automation")
automation.enabled = False
automation.description = "Kept for reference, but no longer used."
updated_automation = api.update_automation(automation)
OR
import wandb
api = wandb.Api()
automation = api.automation(name="my-automation")
updated_automation = api.update_automation(
    automation,
    enabled=False,
    description="Kept for reference, but no longer used.",
)
Api.upsert_run_queueupsert_run_queue(
    name: str,
    resource_config: dict,
    resource_type: 'public.RunQueueResourceType',
    entity: Optional[str] = None,
    template_variables: Optional[dict] = None,
    external_links: Optional[dict] = None,
    prioritization_mode: Optional[ForwardRef('public.RunQueuePrioritizationMode')] = None
)
Upsert a run queue in W&B Launch.
Args:
name:  Name of the queue to createentity:  Optional name of the entity to create the queue. If None,  use the configured or default entity.resource_config:  Optional default resource configuration to be used  for the queue. Use handlebars (eg. {{var}}) to specify  template variables.resource_type:  Type of resource to be used for the queue. One of  “local-container”, “local-process”, “kubernetes”, “sagemaker”,  or “gcp-vertex”.template_variables:  A dictionary of template variable schemas to  be used with the config.external_links:  Optional dictionary of external links to be used  with the queue.prioritization_mode:  Optional version of prioritization to use.  Either “V0” or NoneReturns:
The upserted RunQueue.
Raises: ValueError if any of the parameters are invalid wandb.Error on wandb API errors
Api.useruser(username_or_email: str) → Optional[ForwardRef('public.User')]
Return a user from a username or email address.
This function only works for local administrators. Use api.viewer  to get your own user object.
Args:
username_or_email:  The username or email address of the user.Returns:
A User object or None if a user is not found.
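For example, a minimal sketch for instance administrators; the email is a placeholder, and the username attribute on the returned User object is assumed here:
import wandb
api = wandb.Api()
user = api.user("someone@example.com")
if user is not None:
    print(user.username)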
Api.usersusers(username_or_email: str) → List[ForwardRef('public.User')]
Return all users from a partial username or email address query.
This function only works for local administrators. Use api.viewer  to get your own user object.
Args:
username_or_email:  The prefix or suffix of the user you want to find.Returns:
An array of User objects.
Api.webhook_integrationswebhook_integrations(
    entity: Optional[str] = None,
    per_page: int = 50
) → Iterator[ForwardRef('WebhookIntegration')]
Returns an iterator of webhook integrations for an entity.
Args:
entity:  The entity (e.g. team name) for which to  fetch integrations.  If not provided, the user’s default entity  will be used.per_page:  Number of integrations to fetch per page.  Defaults to 50.  Usually there is no reason to change this.Yields:
Iterator[WebhookIntegration]:  An iterator of webhook integrations.Examples: Get all registered webhook integrations for the team “my-team”:
import wandb
api = wandb.Api()
webhook_integrations = api.webhook_integrations(entity="my-team")
Find only webhook integrations that post requests to “https://my-fake-url.com”:
webhook_integrations = api.webhook_integrations(entity="my-team")
my_webhooks = [
    ig
    for ig in webhook_integrations
    if ig.url_endpoint.startswith("https://my-fake-url.com")
]
wandb.apis.publicW&B Public API for Artifact objects.
This module provides classes for interacting with W&B artifacts and their collections.
ArtifactTypes
A lazy iterator of ArtifactType objects for a specific project.
ArtifactType
An artifact type object that satisfies a query based on the specified type.
Args:
client:  The client instance to use for querying W&B.entity:  The entity (user or team) that owns the project.project:  The name of the project to query for artifact types.type_name:  The name of the artifact type.attrs:  Optional mapping of attributes to initialize the artifact type. If not provided,  the object will load its attributes from W&B upon initialization.The unique identifier of the artifact type.
The name of the artifact type.
ArtifactType.collectioncollection(name: 'str') → ArtifactCollection
Get a specific artifact collection by name.
Args:
name (str):  The name of the artifact collection to retrieve.ArtifactType.collectionscollections(per_page: 'int' = 50) → ArtifactCollections
Get all artifact collections associated with this artifact type.
Args:
per_page (int):  The number of artifact collections to fetch per page.  Default is 50.ArtifactCollectionsArtifact collections of a specific type in a project.
Args:
client:  The client instance to use for querying W&B.entity:  The entity (user or team) that owns the project.project:  The name of the project to query for artifact collections.type_name:  The name of the artifact type for which to fetch collections.per_page:  The number of artifact collections to fetch per page. Default is 50.ArtifactCollectionAn artifact collection that represents a group of related artifacts.
Args:
client:  The client instance to use for querying W&B.entity:  The entity (user or team) that owns the project.project:  The name of the project to query for artifact collections.name:  The name of the artifact collection.type:  The type of the artifact collection (e.g., “dataset”, “model”).organization:  Optional organization name if applicable.attrs:  Optional mapping of attributes to initialize the artifact collection.  If not provided, the object will load its attributes from W&B upon  initialization.Artifact Collection Aliases.
The creation date of the artifact collection.
A description of the artifact collection.
The unique identifier of the artifact collection.
The name of the artifact collection.
The tags associated with the artifact collection.
Returns the type of the artifact collection.
ArtifactCollection.artifactsartifacts(per_page: 'int' = 50) → Artifacts
Get all artifacts in the collection.
ArtifactCollection.change_typechange_type(new_type: 'str') → None
Deprecated, change type directly with save instead.
ArtifactCollection.deletedelete() → None
Delete the entire artifact collection.
ArtifactCollection.is_sequenceis_sequence() → bool
Return whether the artifact collection is a sequence.
ArtifactCollection.savesave() → None
Persist any changes made to the artifact collection.
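For example, a minimal sketch of editing a collection and persisting the change; "model" and "entity/project/my-collection" are placeholders for your own collection type and path:
import wandb
api = wandb.Api()
collection = api.artifact_collection(type_name="model", name="entity/project/my-collection")
# Update the collection's description, then persist the change to W&B
collection.description = "Updated description for this collection"
collection.save()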
ArtifactsAn iterable collection of artifact versions associated with a project.
Optionally pass in filters to narrow down the results based on specific criteria.
Args:
client: The client instance to use for querying W&B.
entity: The entity (user or team) that owns the project.
project: The name of the project to query for artifacts.
collection_name: The name of the artifact collection to query.
type: The type of the artifacts to query. Common examples include "dataset" or "model".
filters: Optional mapping of filters to apply to the query.
order: Optional string to specify the order of the results.
per_page: The number of artifact versions to fetch per page. Default is 50.
tags: Optional string or list of strings to filter artifacts by tags.
RunArtifacts
An iterable collection of artifacts associated with a specific run.
ArtifactFilesA paginator for files in an artifact.
Returns the path of the artifact.
wandb.apis.publicW&B Public API for Automation objects.
Automations
A lazy iterator of Automation objects.
wandb.apis.publicW&B Public API for File objects.
This module provides classes for interacting with files stored in W&B.
Example:
from wandb.apis.public import Api
# Get files from a specific run
run = Api().run("entity/project/run_id")
files = run.files()
# Work with files
for file in files:
    print(f"File: {file.name}")
    print(f"Size: {file.size} bytes")
    print(f"Type: {file.mimetype}")
    # Download file
    if file.size < 1000000:  # Less than 1MB
        file.download(root="./downloads")
    # Get S3 URI for large files
    if file.size >= 1000000:
        print(f"S3 URI: {file.path_uri}")
Note:
This module is part of the W&B Public API and provides methods to access, download, and manage files stored in W&B. Files are typically associated with specific runs and can include model weights, datasets, visualizations, and other artifacts.
FilesA lazy iterator over a collection of File objects.
Access and manage files uploaded to W&B during a run. Handles pagination automatically when iterating through large collections of files.
Example:
from wandb.apis.public.files import Files
from wandb.apis.public.api import Api
# Example run object
run = Api().run("entity/project/run-id")
# Create a Files object to iterate over files in the run
files = Files(api.client, run)
# Iterate over files
for file in files:
    print(file.name)
    print(file.url)
    print(file.size)
    # Download the file
    file.download(root="download_directory", replace=True)
Files.__init____init__(
    client: 'RetryingClient',
    run: 'Run',
    names: 'list[str] | None' = None,
    per_page: 'int' = 50,
    upload: 'bool' = False,
    pattern: 'str | None' = None
)
Initialize a lazy iterator over a collection of File objects.
Files are retrieved in pages from the W&B server as needed.
Args:
client: The API client used to query W&B.
run: The run object that contains the files.
names (list, optional): A list of file names to filter the files.
per_page (int, optional): The number of files to fetch per page.
upload (bool, optional): If True, fetch the upload URL for each file.
pattern (str, optional): Pattern to match when returning files from W&B. This pattern uses MySQL's LIKE syntax, so matching all files that end with .json would be "%.json". If both names and pattern are provided, a ValueError will be raised.
wandb.apis.publicW&B Public API for Run History.
This module provides classes for efficiently scanning and sampling run history data.
Note:
This module is part of the W&B Public API and provides methods to access run history data. It handles pagination automatically and offers both complete and sampled access to metrics logged during training runs.
wandb.apis.publicW&B Public API for integrations.
This module provides classes for interacting with W&B integrations.
Integrations
A lazy iterator of Integration objects.
Integrations.__init____init__(client: '_Client', variables: 'dict[str, Any]', per_page: 'int' = 50)
Integrations.convert_objectsconvert_objects() → Iterable[Integration]
Parse the page data into a list of integrations.
wandb.apis.publicW&B Public API for Project objects.
This module provides classes for interacting with W&B projects and their associated data.
Example:
from wandb.apis.public import Api
# Get all projects for an entity
projects = Api().projects("entity")
# Access project data
for project in projects:
    print(f"Project: {project.name}")
    print(f"URL: {project.url}")
    # Get artifact types
    for artifact_type in project.artifacts_types():
        print(f"Artifact Type: {artifact_type.name}")
    # Get sweeps
    for sweep in project.sweeps():
        print(f"Sweep ID: {sweep.id}")
        print(f"State: {sweep.state}")
Note:
This module is part of the W&B Public API and provides methods to access and manage projects. For creating new projects, use wandb.init() with a new project name.
Projects
A lazy iterator of Project objects.
An iterable interface to access projects created and saved by the entity.
Args:
client (wandb.apis.internal.Api):  The API client instance to use.entity (str):  The entity name (username or team) to fetch projects for.per_page (int):  Number of projects to fetch per request (default is 50).Example:
from wandb.apis.public.api import Api
# Find projects that belong to this entity
projects = Api().projects(entity="entity")
# Iterate over files
for project in projects:
   print(f"Project: {project.name}")
   print(f"- URL: {project.url}")
   print(f"- Created at: {project.created_at}")
   print(f"- Is benchmark: {project.is_benchmark}")
Projects.__init____init__(
    client: wandb.apis.public.api.RetryingClient,
    entity: str,
    per_page: int = 50
) → Projects
An iterable collection of Project objects.
Args:
client:  The API client used to query W&B.entity:  The entity which owns the projects.per_page:  The number of projects to fetch per request to the API.ProjectA project is a namespace for runs.
Args:
client:  W&B API client instance.name (str):  The name of the project.entity (str):  The entity name that owns the project.Project.__init____init__(
    client: wandb.apis.public.api.RetryingClient,
    entity: str,
    project: str,
    attrs: dict
) → Project
A single project associated with an entity.
Args:
client:  The API client used to query W&B.entity:  The entity which owns the project.project:  The name of the project to query.attrs:  The attributes of the project.Returns the path of the project. The path is a list containing the entity and project name.
Returns the URL of the project.
Project.artifacts_typesartifacts_types(per_page=50)
Returns all artifact types associated with this project.
Project.sweepssweeps(per_page=50)
Return a paginated collection of sweeps in this project.
Args:
per_page:  The number of sweeps to fetch per request to the API.Returns:
A Sweeps object, which is an iterable collection of Sweep objects.
wandb.apis.publicW&B Public API for Report objects.
This module provides classes for interacting with W&B reports and managing report-related data.
ReportsReports is a lazy iterator of BetaReport objects.
Args:
client (wandb.apis.internal.Api):  The API client instance to use.project (wandb.sdk.internal.Project):  The project to fetch reports from.name (str, optional):  The name of the report to filter by. If None,  fetches all reports.entity (str, optional):  The entity name for the project. Defaults to  the project entity.per_page (int):  Number of reports to fetch per page (default is 50).Reports.__init____init__(client, project, name=None, entity=None, per_page=50)
Reports.convert_objectsconvert_objects()
Converts GraphQL edges to File objects.
Reports.update_variablesupdate_variables()
Updates the GraphQL query variables for pagination.
BetaReportBetaReport is a class associated with reports created in W&B.
Provides access to report attributes (name, description, user, spec, timestamps) and methods for retrieving associated runs, sections, and for rendering the report as HTML.
Attributes:
id (string): Unique identifier of the report.
display_name (string): Human-readable display name of the report.
name (string): The name of the report. Use display_name for a more user-friendly name.
description (string): Description of the report.
user (User): Dictionary containing user info (username, email) of the user who created the report.
spec (dict): The spec of the report.
url (string): The URL of the report.
updated_at (string): Timestamp of last update.
created_at (string): Timestamp when the report was created.
BetaReport.__init__
__init__(client, attrs, entity=None, project=None)
Get the panel sections (groups) from the report.
BetaReport.runsruns(section, per_page=50, only_selected=True)
Get runs associated with a section of the report.
BetaReport.to_htmlto_html(height=1024, hidden=False)
Generate HTML containing an iframe displaying this report.
wandb.apis.publicW&B Public API for Runs.
This module provides classes for interacting with W&B runs and their associated data.
Example:
from wandb.apis.public import Api
# Get runs matching filters
runs = Api().runs(
    path="entity/project", filters={"state": "finished", "config.batch_size": 32}
)
# Access run data
for run in runs:
    print(f"Run: {run.name}")
    print(f"Config: {run.config}")
    print(f"Metrics: {run.summary}")
    # Get history with pandas
    history_df = run.history(keys=["loss", "accuracy"], pandas=True)
    # Work with artifacts
    for artifact in run.logged_artifacts():
        print(f"Artifact: {artifact.name}")
Note:
This module is part of the W&B Public API and provides read/write access to run data. For logging new runs, use the wandb.init() function from the main wandb package.
RunsA lazy iterator of Run objects associated with a project and optional filter.
Runs are retrieved in pages from the W&B server as needed.
This is generally used indirectly using the Api.runs namespace.
Args:
client: (wandb.apis.public.RetryingClient) The API client to use for requests.
entity: (str) The entity (username or team) that owns the project.
project: (str) The name of the project to fetch runs from.
filters: (Optional[Dict[str, Any]]) A dictionary of filters to apply to the runs query.
order: (str) Order can be created_at, heartbeat_at, config.*.value, or summary_metrics.*. If you prepend order with a + the order is ascending (default). If you prepend order with a - the order is descending. The default order is run.created_at from oldest to newest.
per_page: (int) The number of runs to fetch per request (default is 50).
include_sweeps: (bool) Whether to include sweep information in the runs. Defaults to True.
Examples:
from wandb.apis.public.runs import Runs
from wandb.apis.public import Api
# Get all runs from a project that satisfy the filters
filters = {"state": "finished", "config.optimizer": "adam"}
runs = Api().runs(
   client=api.client,
   entity="entity",
   project="project_name",
   filters=filters,
)
# Iterate over runs and print details
for run in runs:
   print(f"Run name: {run.name}")
   print(f"Run ID: {run.id}")
   print(f"Run URL: {run.url}")
   print(f"Run state: {run.state}")
   print(f"Run config: {run.config}")
   print(f"Run summary: {run.summary}")
   print(f"Run history (samples=5): {run.history(samples=5)}")
   print("----------")
# Get histories for all runs with specific metrics
histories_df = runs.histories(
   samples=100,  # Number of samples per run
   keys=["loss", "accuracy"],  # Metrics to fetch
   x_axis="_step",  # X-axis metric
   format="pandas",  # Return as pandas DataFrame
)
Runs.__init____init__(
    client: 'RetryingClient',
    entity: 'str',
    project: 'str',
    filters: 'dict[str, Any] | None' = None,
    order: 'str' = '+created_at',
    per_page: 'int' = 50,
    include_sweeps: 'bool' = True
)
Runs.historieshistories(
    samples: 'int' = 500,
    keys: 'list[str] | None' = None,
    x_axis: 'str' = '_step',
    format: "Literal['default', 'pandas', 'polars']" = 'default',
    stream: "Literal['default', 'system']" = 'default'
)
Return sampled history metrics for all runs that fit the filters conditions.
Args:
samples:  The number of samples to return per runkeys:  Only return metrics for specific keysx_axis:  Use this metric as the xAxis defaults to _stepformat:  Format to return data in, options are “default”, “pandas”,  “polars”stream:  “default” for metrics, “system” for machine metricsReturns:
pandas.DataFrame:  If format="pandas", returns a pandas.DataFrame  of history metrics.polars.DataFrame:  If format="polars", returns a polars.DataFrame  of history metrics.list of dicts:  If format="default", returns a list of dicts  containing history metrics with a run_id key.RunA single run associated with an entity and project.
Args:
client: The W&B API client.
entity: The entity associated with the run.
project: The project associated with the run.
run_id: The unique identifier for the run.
attrs: The attributes of the run.
include_sweeps: Whether to include sweeps in the run.
Attributes:
tags ([str]): A list of tags associated with the run.
url (str): The URL of this run.
id (str): Unique identifier for the run (defaults to eight characters).
name (str): The name of the run.
state (str): One of: running, finished, crashed, killed, preempting, preempted.
config (dict): A dict of hyperparameters associated with the run.
created_at (str): ISO timestamp when the run was started.
system_metrics (dict): The latest system metrics recorded for the run.
summary (dict): A mutable dict-like property that holds the current summary. Calling update will persist any changes.
project (str): The project associated with the run.
entity (str): The name of the entity associated with the run.
project_internal_id (int): The internal id of the project.
user (str): The name of the user who created the run.
path (str): Unique identifier [entity]/[project]/[run_id].
notes (str): Notes about the run.
read_only (boolean): Whether the run is editable.
history_keys (str): Keys of the history metrics that have been logged with wandb.log({key: value}).
metadata (str): Metadata about the run from wandb-metadata.json.
Run.__init__
__init__(
    client: 'RetryingClient',
    entity: 'str',
    project: 'str',
    run_id: 'str',
    attrs: 'Mapping | None' = None,
    include_sweeps: 'bool' = True
)
Initialize a Run object.
Run is always initialized by calling api.runs() where api is an instance of wandb.Api.
The entity associated with the run.
The unique identifier for the run.
Returns the last step logged in the run’s history.
Metadata about the run from wandb-metadata.json.
Metadata includes the run’s description, tags, start time, memory usage and more.
The name of the run.
The path of the run. The path is a list containing the entity, project, and run_id.
The state of the run. Can be one of: Finished, Failed, Crashed, or Running.
The unique storage identifier for the run.
A mutable dict-like property that holds summary values associated with the run.
The URL of the run.
The run URL is generated from the entity, project, and run_id. For SaaS users, it takes the form of https://wandb.ai/entity/project/run_id.
This API is deprecated. Use entity instead.
Run.createcreate(
    api: 'public.Api',
    run_id: 'str | None' = None,
    project: 'str | None' = None,
    entity: 'str | None' = None,
    state: "Literal['running', 'pending']" = 'running'
)
Create a run for the given project.
Run.deletedelete(delete_artifacts=False)
Delete the given run from the wandb backend.
Args:
delete_artifacts (bool, optional):  Whether to delete the artifacts  associated with the run.Run.filefile(name)
Return the path of a file with a given name in the artifact.
Args:
name (str):  name of requested file.Returns:
A File matching the name argument.
Run.filesfiles(
    names: 'list[str] | None' = None,
    pattern: 'str | None' = None,
    per_page: 'int' = 50
)
Returns a Files object for all files in the run which match the given criteria.
You can specify a list of exact file names to match, or a pattern to match against. If both are provided, the pattern will be ignored.
Args:
names (list):  names of the requested files, if empty returns all filespattern (str, optional):  Pattern to match when returning files from W&B.  This pattern uses mySQL’s LIKE syntax,  so matching all files that end with .json would be “%.json”.  If both names and pattern are provided, a ValueError will be raised.per_page (int):  number of results per page.Returns:
A Files object, which is an iterator over File objects.
Run.historyhistory(samples=500, keys=None, x_axis='_step', pandas=True, stream='default')
Return sampled history metrics for a run.
This is simpler and faster if you are ok with the history records being sampled.
Args:
samples :  (int, optional) The number of samples to returnpandas :  (bool, optional) Return a pandas dataframekeys :  (list, optional) Only return metrics for specific keysx_axis :  (str, optional) Use this metric as the xAxis defaults to _stepstream :  (str, optional) “default” for metrics, “system” for machine metricsReturns:
pandas.DataFrame:  If pandas=True returns a pandas.DataFrame of history  metrics.list of dicts:  If pandas=False returns a list of dicts of history metrics.Run.loadload(force=False)
Run.log_artifactlog_artifact(
    artifact: 'wandb.Artifact',
    aliases: 'Collection[str] | None' = None,
    tags: 'Collection[str] | None' = None
)
Declare an artifact as output of a run.
Args:
artifact (Artifact): An artifact returned from wandb.Api().artifact(name).
aliases (list, optional): Aliases to apply to this artifact.
tags (list, optional): Tags to apply to this artifact, if any.
Returns:
An Artifact object.
Run.logged_artifactslogged_artifacts(per_page: 'int' = 100) → public.RunArtifacts
Fetches all artifacts logged by this run.
Retrieves all output artifacts that were logged during the run. Returns a paginated result that can be iterated over or collected into a single list.
Args:
per_page:  Number of artifacts to fetch per API request.Returns: An iterable collection of all Artifact objects logged as outputs during this run.
Example:
import wandb
import tempfile
with tempfile.NamedTemporaryFile(mode="w", delete=False, suffix=".txt") as tmp:
   tmp.write("This is a test artifact")
   tmp_path = tmp.name
run = wandb.init(project="artifact-example")
artifact = wandb.Artifact("test_artifact", type="dataset")
artifact.add_file(tmp_path)
run.log_artifact(artifact)
run.finish()
api = wandb.Api()
finished_run = api.run(f"{run.entity}/{run.project}/{run.id}")
for logged_artifact in finished_run.logged_artifacts():
   print(logged_artifact.name)
Run.savesave()
Persist changes to the run object to the W&B backend.
Run.scan_historyscan_history(keys=None, page_size=1000, min_step=None, max_step=None)
Returns an iterable collection of all history records for a run.
Args:
keys ([str], optional):  only fetch these keys, and only fetch rows that have all of keys defined.page_size (int, optional):  size of pages to fetch from the api.min_step (int, optional):  the minimum number of pages to scan at a time.max_step (int, optional):  the maximum number of pages to scan at a time.Returns: An iterable collection over history records (dict).
Example: Export all the loss values for an example run
run = api.run("entity/project-name/run-id")
history = run.scan_history(keys=["Loss"])
losses = [row["Loss"] for row in history]
Run.to_htmlto_html(height=420, hidden=False)
Generate HTML containing an iframe displaying this run.
Run.updateupdate()
Persist changes to the run object to the wandb backend.
Run.upload_fileupload_file(path, root='.')
Upload a local file to W&B, associating it with this run.
Args:
path (str):  Path to the file to upload. Can be absolute or relative.root (str):  The root path to save the file relative to. For example,  if you want to have the file saved in the run as “my_dir/file.txt”  and you’re currently in “my_dir” you would set root to “../”.  Defaults to current directory (".").Returns:
A File object representing the uploaded file.
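For example, a minimal sketch of uploading a local file to a finished run; the run path and file name are placeholders:
import wandb
api = wandb.Api()
run = api.run("entity/project/run_id")
# Upload a local file so it appears under the run's files, relative to the current directory
run.upload_file("results/metrics.csv", root=".")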
Run.use_artifactuse_artifact(artifact, use_as=None)
Declare an artifact as an input to a run.
Args:
artifact (Artifact): An artifact returned from wandb.Api().artifact(name).
use_as (string, optional): A string identifying how the artifact is used in the script. Used to easily differentiate artifacts used in a run, when using the beta wandb launch feature's artifact swapping functionality.
Returns:
An Artifact object.
Run.used_artifactsused_artifacts(per_page: 'int' = 100) → public.RunArtifacts
Fetches artifacts explicitly used by this run.
Retrieves only the input artifacts that were explicitly declared as used during the run, typically via run.use_artifact(). Returns a paginated result that can be iterated over or collected into a single list.
Args:
per_page: Number of artifacts to fetch per API request.
Returns: An iterable collection of Artifact objects explicitly used as inputs in this run.
Example:
import wandb
run = wandb.init(project="artifact-example")
run.use_artifact("test_artifact:latest")
run.finish()
api = wandb.Api()
finished_run = api.run(f"{run.entity}/{run.project}/{run.id}")
for used_artifact in finished_run.used_artifacts():
   print(used_artifact.name)
test_artifact
Run.wait_until_finishedwait_until_finished()
Check the state of the run until it is finished.
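Example (a hedged sketch; the run path is a placeholder for an in-progress run):
import wandb

api = wandb.Api()
run = api.run("entity/project/run_id")  # placeholder run path
run.wait_until_finished()  # refreshes the run state until the run is no longer running
print(run.state)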
wandb.apis.publicW&B Public API for Sweeps.
This module provides classes for interacting with W&B hyperparameter optimization sweeps.
Example:
from wandb.apis.public import Api
# Get a specific sweep
sweep = Api().sweep("entity/project/sweep_id")
# Access sweep properties
print(f"Sweep: {sweep.name}")
print(f"State: {sweep.state}")
print(f"Best Loss: {sweep.best_loss}")
# Get best performing run
best_run = sweep.best_run()
print(f"Best Run: {best_run.name}")
print(f"Metrics: {best_run.summary}")
Note:
This module is part of the W&B Public API and provides read-only access to sweep data. For creating and controlling sweeps, use the wandb.sweep() and wandb.agent() functions from the main wandb package.
SweepsA lazy iterator over a collection of Sweep objects.
Examples:
from wandb.apis.public import Api
sweeps = Api().project(name="project_name", entity="entity").sweeps()
# Iterate over sweeps and print details
for sweep in sweeps:
    print(f"Sweep name: {sweep.name}")
    print(f"Sweep ID: {sweep.id}")
    print(f"Sweep URL: {sweep.url}")
    print("----------")
Sweeps.__init____init__(
    client: wandb.apis.public.api.RetryingClient,
    entity: str,
    project: str,
    per_page: int = 50
) → Sweeps
An iterable collection of Sweep objects.
Args:
client: The API client used to query W&B.
entity: The entity which owns the sweeps.
project: The project which contains the sweeps.
per_page: The number of sweeps to fetch per request to the API.
SweepThe set of runs associated with the sweep.
Attributes:
runs (Runs): List of runs.
id (str): Sweep ID.
project (str): The name of the project the sweep belongs to.
config (dict): Dictionary containing the sweep configuration.
state (str): The state of the sweep. Can be "Finished", "Failed", "Crashed", or "Running".
expected_run_count (int): The number of expected runs for the sweep.
Sweep.__init____init__(client, entity, project, sweep_id, attrs=None)
The sweep configuration used for the sweep.
The entity associated with the sweep.
Return the number of expected runs in the sweep or None for infinite runs.
The name of the sweep.
Returns the first name that exists in the following priority order:
Return the order key for the sweep.
Returns the path of the project.
The path is a list containing the entity, project name, and sweep ID.
The URL of the sweep.
The sweep URL is generated from the entity, the project, the term "sweeps", and the sweep ID. For SaaS users, it takes the form https://wandb.ai/entity/project/sweeps/sweeps_ID.
Deprecated. Use Sweep.entity instead.
Sweep.best_runbest_run(order=None)
Return the best run sorted by the metric defined in config or the order passed in.
Sweep.getget(
    client: 'RetryingClient',
    entity: Optional[str] = None,
    project: Optional[str] = None,
    sid: Optional[str] = None,
    order: Optional[str] = None,
    query: Optional[str] = None,
    **kwargs
)
Execute a query against the cloud backend.
Args:
client: The client to use to execute the query.
entity: The entity (username or team) that owns the project.
project: The name of the project to fetch the sweep from.
sid: The sweep ID to query.
order: The order in which the sweep's runs are returned.
query: The query to execute.
**kwargs: Additional keyword arguments to pass to the query.
Sweep.to_htmlto_html(height=420, hidden=False)
Generate HTML containing an iframe displaying this sweep.
wandb.apis.publicW&B Public API for managing teams and team members.
This module provides classes for managing W&B teams and their members.
Note:
This module is part of the W&B Public API and provides methods to manage teams and their members. Team management operations require appropriate permissions.
MemberA member of a team.
Args:
client (wandb.apis.internal.Api): The client instance to use.
team (str): The name of the team this member belongs to.
attrs (dict): The member attributes.
Member.__init____init__(client, team, attrs)
Member.deletedelete()
Remove a member from a team.
Returns: Boolean indicating success
TeamA class that represents a W&B team.
This class provides methods to manage W&B teams, including creating teams, inviting members, and managing service accounts. It inherits from Attrs to handle team attributes.
Args:
client (wandb.apis.public.Api): The API instance to use.
name (str): The name of the team.
attrs (dict): Optional dictionary of team attributes.
Note:
Team management requires appropriate permissions.
Team.__init____init__(client, name, attrs=None)
Team.createcreate(api, team, admin_username=None)
Create a new team.
Args:
api (Api): The API instance to use.
team (str): The name of the team.
admin_username (str, optional): Username of the admin user of the team. Defaults to the current user.
Returns:
A Team object.
Team.create_service_accountcreate_service_account(description)
Create a service account for the team.
Args:
description (str): A description for this service account.
Returns:
The service account Member object, or None on failure.
Team.inviteinvite(username_or_email, admin=False)
Invite a user to a team.
Args:
username_or_email (str): The username or email address of the user you want to invite.
admin (bool): Whether to make this user a team admin. Defaults to False.
Returns:
True on success, False if the user was already invited or does not exist.
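Example (a hedged sketch; the team name and email are placeholders, these operations require team admin permissions, and it assumes the team is fetched with api.team()):
import wandb

api = wandb.Api()
team = api.team("my-team")  # placeholder team name
team.invite("colleague@example.com", admin=False)
service_account = team.create_service_account("CI service account")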
wandb.apis.publicW&B Public API for managing users and API keys.
This module provides classes for managing W&B users and their API keys.
Note:
This module is part of the W&B Public API and provides methods to manage users and their authentication. Some operations require admin privileges.
UserA class representing a W&B user with authentication and management capabilities.
This class provides methods to manage W&B users, including creating users, managing API keys, and accessing team memberships. It inherits from Attrs to handle user attributes.
Args:
client (wandb.apis.internal.Api): The client instance to use.
attrs (dict): The user attributes.
Note:
Some operations require admin privileges
User.__init____init__(client, attrs)
List of API key names associated with the user.
Returns:
list[str]: Names of API keys associated with the user. Empty list if the user has no API keys or if API key data hasn't been loaded.
List of team names that the user is a member of.
Returns:
list (list): Names of teams the user belongs to. Empty list if the user has no team memberships or if teams data hasn't been loaded.
An instance of the API using credentials from the user.
User.createcreate(api, email, admin=False)
Create a new user.
Args:
api (Api): The API instance to use.
email (str): The email address of the user to create.
admin (bool): Whether this user should be a global instance admin.
Returns:
A User object.
User.delete_api_keydelete_api_key(api_key)
Delete a user’s api key.
Args:
api_key (str): The name of the API key to delete. This should be one of the names returned by the api_keys property.
Returns: Boolean indicating success.
Raises: ValueError if the api_key couldn't be found.
User.generate_api_keygenerate_api_key(description=None)
Generate a new api key.
Args:
description (str, optional): A description for the new API key. This can be used to identify the purpose of the API key.
Returns: The new API key, or None on failure.
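Example (a hedged sketch; the email is a placeholder and some operations require admin privileges):
import wandb

api = wandb.Api()
user = api.user("user@example.com")  # placeholder username or email
print(user.teams)  # team names the user belongs to
new_key = user.generate_api_key("ci-pipeline key")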
Automate your W&B workflows.
A local instance of a saved W&B automation.
Attributes:
Defines an automation action that intentionally does nothing.
Attributes:
Defines a filter that compares a change in a run metric against a user-defined threshold.
The change is calculated over “tumbling” windows, i.e. the difference between the current window and the non-overlapping prior window.
Attributes:
The size of the prior comparison window; if omitted, it defaults to the size of the current window. Window settings are ignored if agg is None.
Defines a filter that compares a run metric against a user-defined threshold value.
Attributes:
Window settings are ignored if agg is None.
A new automation to be created.
Attributes:
A new alias is assigned to an artifact.
Attributes:
thenthen(self, action: 'InputAction') -> 'NewAutomation'
Define a new Automation in which this event triggers the given action.
A new artifact is created.
Attributes:
thenthen(self, action: 'InputAction') -> 'NewAutomation'
Define a new Automation in which this event triggers the given action.
A new artifact is linked to a collection.
Attributes:
thenthen(self, action: 'InputAction') -> 'NewAutomation'
Define a new Automation in which this event triggers the given action.
A run metric satisfies a user-defined condition.
Attributes:
thenthen(self, action: 'InputAction') -> 'NewAutomation'
Define a new Automation in which this event triggers the given action.
Defines an automation action that sends a (Slack) notification.
Attributes:
The severity (INFO, WARN, ERROR) of the sent notification.
from_integrationfrom_integration(cls, integration: 'SlackIntegration', *, title: 'str' = '', text: 'str' = '', level: 'AlertSeverity' = <AlertSeverity.INFO: 'INFO'>) -> 'Self'
Define a notification action that sends to the given (Slack) integration.
Defines an automation action that sends a webhook request.
Attributes:
from_integrationfrom_integration(cls, integration: 'WebhookIntegration', *, payload: 'Optional[SerializedToJson[dict[str, Any]]]' = None) -> 'Self'
Define a webhook action that sends to the given (webhook) integration.
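To make the event/action pattern above concrete, here is a hedged sketch. The wandb.automations import path, api.slack_integrations(), and api.create_automation() are assumptions about recent wandb client versions and may differ; the entity, project, and collection names are placeholders:
import wandb
from wandb.automations import OnLinkArtifact, SendNotification  # assumed import path

api = wandb.Api()
# Event: a new artifact is linked to a collection (collection path is a placeholder)
collection = api.artifact_collection(type_name="model", name="entity/project/my-models")
event = OnLinkArtifact(scope=collection)
# Action: send a Slack notification through an existing integration (assumed helper)
integration = next(iter(api.slack_integrations(entity="entity")))
action = SendNotification.from_integration(integration, title="New model linked")
# event.then(action) returns a NewAutomation; saving it is assumed to go through api.create_automation()
automation = api.create_automation(event.then(action), name="notify-on-model-link")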
class reports: Python library for programmatically working with W&B Reports API.
class workspaces: Python library for programmatically working with W&B Workspace API.
wandb_workspaces.reports.v2Python library for programmatically working with W&B Reports API.
import wandb_workspaces.reports.v2 as wr
report = wr.Report(
     entity="entity",
     project="project",
     title="An amazing title",
     description="A descriptive description.",
)
blocks = [
     wr.PanelGrid(
         panels=[
             wr.LinePlot(x="time", y="velocity"),
             wr.ScatterPlot(x="time", y="acceleration"),
         ]
     )
]
report.blocks = blocks
report.save()
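The block and panel classes documented below can be combined in the same blocks list; for example, a small sketch adding text blocks in front of the panel grid created above:
report.blocks = [
    wr.H1(text="Experiment summary"),
    wr.P(text="Velocity and acceleration over time."),
    wr.CodeBlock(code="print('hello world')", language="python"),
] + blocks
report.save()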
BarPlotA panel object that shows a 2D bar plot.
Attributes:
title (Optional[str]): The text that appears at the top of the plot.
metrics (LList[MetricType]): One or more metrics logged to your W&B project that the report pulls information from.
orientation (Literal["v", "h"]): The orientation of the bar plot. Set to either vertical ("v") or horizontal ("h"). Defaults to horizontal ("h").
range_x (Tuple[float | None, float | None]): Tuple that specifies the range of the x-axis.
title_x (Optional[str]): The label of the x-axis.
title_y (Optional[str]): The label of the y-axis.
groupby (Optional[str]): Group runs based on a metric logged to your W&B project that the report pulls information from.
groupby_aggfunc (Optional[GroupAgg]): Aggregate runs with the specified function. Options include mean, min, max, median, sum, samples, or None.
groupby_rangefunc (Optional[GroupArea]): Group runs based on a range. Options include minmax, stddev, stderr, none, samples, or None.
max_runs_to_show (Optional[int]): The maximum number of runs to show on the plot.
max_bars_to_show (Optional[int]): The maximum number of bars to show on the bar plot.
custom_expressions (Optional[LList[str]]): A list of custom expressions to be used in the bar plot.
legend_template (Optional[str]): The template for the legend.
font_size (Optional[FontSize]): The size of the plot's font. Options include small, medium, large, auto, or None.
line_titles (Optional[dict]): The titles of the lines. The keys are the line names and the values are the titles.
line_colors (Optional[dict]): The colors of the lines. The keys are the line names and the values are the colors.
BlockQuoteA block of quoted text.
Attributes:
text (str): The text of the block quote.CalloutBlockA block of callout text.
Attributes:
text (str): The callout text.CheckedListA list of items with checkboxes. Add one or more CheckedListItem within CheckedList.
Attributes:
items (LList[CheckedListItem]): A list of one or more CheckedListItem objects.CheckedListItemA list item with a checkbox. Add one or more CheckedListItem within CheckedList.
Attributes:
text (str): The text of the list item.checked (bool): Whether the checkbox is checked. By default, set to False.CodeBlockA block of code.
Attributes:
code (str): The code in the block.language (Optional[Language]): The language of the code. Language specified is used for syntax highlighting. By default, set to python. Options include javascript, python, css, json, html, markdown, yaml.CodeComparerA panel object that compares the code between two different runs.
Attributes:
diff (Literal['split', 'unified']): How to display code differences. Options include split and unified.ConfigMetrics logged to a run’s config object. Config objects are commonly logged using wandb.Run.config[name] = ... or passing a config as a dictionary of key-value pairs, where the key is the name of the metric and the value is the value of that metric.
Attributes:
name (str): The name of the metric.CustomChartA panel that shows a custom chart. The chart is defined by a weave query.
Attributes:
query (dict): The query that defines the custom chart. The key is the name of the field, and the value is the query.chart_name (str): The title of the custom chart.chart_fields (dict): Key-value pairs that define the axis of the plot. Where the key is the label, and the value is the metric.chart_strings (dict): Key-value pairs that define the strings in the chart.from_tablefrom_table(
    table_name: str,
    chart_fields: dict = None,
    chart_strings: dict = None
)
Create a custom chart from a table.
Arguments:
table_name (str): The name of the table.chart_fields (dict): The fields to display in the chart.chart_strings (dict): The strings to display in the chart.GalleryA block that renders a gallery of reports and URLs.
Attributes:
items (List[Union[GalleryReport, GalleryURL]]): A list of GalleryReport and GalleryURL objects.GalleryReportA reference to a report in the gallery.
Attributes:
report_id (str): The ID of the report.GalleryURLA URL to an external resource.
Attributes:
url (str): The URL of the resource.title (Optional[str]): The title of the resource.description (Optional[str]): The description of the resource.image_url (Optional[str]): The URL of an image to display.GradientPointA point in a gradient.
Attributes:
color: The color of the point.offset: The position of the point in the gradient. The value should be between 0 and 100.H1An H1 heading with the text specified.
Attributes:
text (str): The text of the heading.collapsed_blocks (Optional[LList[“BlockTypes”]]): The blocks to show when the heading is collapsed.H2An H2 heading with the text specified.
Attributes:
text (str): The text of the heading.collapsed_blocks (Optional[LList[“BlockTypes”]]): One or more blocks to show when the heading is collapsed.H3An H3 heading with the text specified.
Attributes:
text (str): The text of the heading.collapsed_blocks (Optional[LList[“BlockTypes”]]): One or more blocks to show when the heading is collapsed.HeadingHorizontalRuleHTML horizontal line.
ImageA block that renders an image.
Attributes:
url (str): The URL of the image.caption (str): The caption of the image. Caption appears underneath the image.InlineCodeInline code. Does not add newline character after code.
Attributes:
text (str): The code you want to appear in the report.InlineLatexInline LaTeX markdown. Does not add newline character after the LaTeX markdown.
Attributes:
text (str): LaTeX markdown you want to appear in the report.LatexBlockA block of LaTeX text.
Attributes:
text (str): The LaTeX text.LayoutThe layout of a panel in a report. Adjusts the size and position of the panel.
Attributes:
x (int): The x position of the panel.y (int): The y position of the panel.w (int): The width of the panel.h (int): The height of the panel.LinePlotA panel object with 2D line plots.
Attributes:
title (Optional[str]): The text that appears at the top of the plot.
x (Optional[MetricType]): The name of a metric logged to your W&B project that the report pulls information from. The metric specified is used for the x-axis.
y (LList[MetricType]): One or more metrics logged to your W&B project that the report pulls information from. The metrics specified are used for the y-axis.
range_x (Tuple[float | None, float | None]): Tuple that specifies the range of the x-axis.
range_y (Tuple[float | None, float | None]): Tuple that specifies the range of the y-axis.
log_x (Optional[bool]): Plots the x-coordinates using a base-10 logarithmic scale.
log_y (Optional[bool]): Plots the y-coordinates using a base-10 logarithmic scale.
title_x (Optional[str]): The label of the x-axis.
title_y (Optional[str]): The label of the y-axis.
ignore_outliers (Optional[bool]): If set to True, do not plot outliers.
groupby (Optional[str]): Group runs based on a metric logged to your W&B project that the report pulls information from.
groupby_aggfunc (Optional[GroupAgg]): Aggregate runs with the specified function. Options include mean, min, max, median, sum, samples, or None.
groupby_rangefunc (Optional[GroupArea]): Group runs based on a range. Options include minmax, stddev, stderr, none, samples, or None.
smoothing_factor (Optional[float]): The smoothing factor to apply to the smoothing type. Accepted values range between 0 and 1.
smoothing_type (Optional[SmoothingType]): Apply a filter based on the specified distribution. Options include exponentialTimeWeighted, exponential, gaussian, average, or none.
smoothing_show_original (Optional[bool]): If set to True, show the original data.
max_runs_to_show (Optional[int]): The maximum number of runs to show on the line plot.
custom_expressions (Optional[LList[str]]): Custom expressions to apply to the data.
plot_type (Optional[LinePlotStyle]): The type of line plot to generate. Options include line, stacked-area, or pct-area.
font_size (Optional[FontSize]): The size of the line plot's font. Options include small, medium, large, auto, or None.
legend_position (Optional[LegendPosition]): Where to place the legend. Options include north, south, east, west, or None.
legend_template (Optional[str]): The template for the legend.
aggregate (Optional[bool]): If set to True, aggregate the data.
xaxis_expression (Optional[str]): The expression for the x-axis.
legend_fields (Optional[LList[str]]): The fields to include in the legend.
LinkA link to a URL.
Attributes:
text (Union[str, TextWithInlineComments]): The text of the link.url (str): The URL the link points to.MarkdownBlockA block of markdown text. Useful if you want to write text that uses common markdown syntax.
Attributes:
text (str): The markdown text.MarkdownPanelA panel that renders markdown.
Attributes:
markdown (str): The text you want to appear in the markdown panel.MediaBrowserA panel that displays media files in a grid layout.
Attributes:
num_columns (Optional[int]): The number of columns in the grid.media_keys (LList[str]): A list of media keys that correspond to the media files.MetricA metric to display in a report that is logged in your project.
Attributes:
name (str): The name of the metric.OrderByA metric to order by.
Attributes:
name (str): The name of the metric.ascending (bool): Whether to sort in ascending order. By default set to False.OrderedListA list of items in a numbered list.
Attributes:
items (LList[str]): A list of one or more OrderedListItem objects.OrderedListItemA list item in an ordered list.
Attributes:
text (str): The text of the list item.PA paragraph of text.
Attributes:
text (str): The text of the paragraph.PanelA panel that displays a visualization in a panel grid.
Attributes:
layout (Layout): A Layout object.PanelGridA grid that consists of runsets and panels. Add runsets and panels with Runset and Panel objects, respectively.
Available panels include: LinePlot, ScatterPlot, BarPlot, ScalarChart, CodeComparer, ParallelCoordinatesPlot, ParameterImportancePlot, RunComparer, MediaBrowser, MarkdownPanel, CustomChart, WeavePanel, WeavePanelSummaryTable, WeavePanelArtifactVersionedFile.
Attributes:
runsets (LList["Runset"]): A list of one or more Runset objects.
panels (LList["PanelTypes"]): A list of one or more Panel objects.
active_runset (int): The number of runs you want to display within a runset. By default, it is set to 0.
custom_run_colors (dict): Key-value pairs where the key is the name of a run and the value is a color specified by a hexadecimal value.
ParallelCoordinatesPlotA panel object that shows a parallel coordinates plot.
Attributes:
columns (LList[ParallelCoordinatesPlotColumn]): A list of one or more ParallelCoordinatesPlotColumn objects.
title (Optional[str]): The text that appears at the top of the plot.
gradient (Optional[LList[GradientPoint]]): A list of gradient points.
font_size (Optional[FontSize]): The size of the plot's font. Options include small, medium, large, auto, or None.
ParallelCoordinatesPlotColumnA column within a parallel coordinates plot. The order of metrics specified determines the order of the parallel axes (x-axis) in the parallel coordinates plot.
Attributes:
metric (str | Config | SummaryMetric): The name of the metric logged to your W&B project that the report pulls information from.
display_name (Optional[str]): The display name of the metric.
inverted (Optional[bool]): Whether to invert the metric.
log (Optional[bool]): Whether to apply a log transformation to the metric.
ParameterImportancePlotA panel that shows how important each hyperparameter is in predicting the chosen metric.
Attributes:
with_respect_to (str): The metric you want to compare the parameter importance against. Common metrics might include the loss, accuracy, and so forth. The metric you specify must be logged within the project that the report pulls information from.
ReportAn object that represents a W&B Report. Use the returned object's blocks attribute to customize your report. Report objects do not save automatically. Use the save() method to persist changes.
Attributes:
project (str): The name of the W&B project you want to load in. The project specified appears in the report's URL.
entity (str): The W&B entity that owns the report. The entity appears in the report's URL.
title (str): The title of the report. The title appears at the top of the report as an H1 heading.
description (str): A description of the report. The description appears underneath the report's title.
blocks (LList[BlockTypes]): A list of one or more HTML tags, plots, grids, runsets, and more.
width (Literal['readable', 'fixed', 'fluid']): The width of the report. Options include 'readable', 'fixed', 'fluid'.
The URL where the report is hosted. The report URL takes the form https://wandb.ai/{entity}/{project_name}/reports/, where {entity} and {project_name} are the entity that the report belongs to and the name of the project, respectively.
from_urlfrom_url(url: str, as_model: bool = False)
Load in the report into current environment. Pass in the URL where the report is hosted.
Arguments:
url (str): The URL where the report is hosted.
as_model (bool): If True, return the model object instead of the Report object. By default, set to False.
savesave(draft: bool = False, clone: bool = False)
Persists changes made to a report object.
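For example, a hedged sketch that loads an existing report, appends a block, and saves it (the report URL is a placeholder):
import wandb_workspaces.reports.v2 as wr

report = wr.Report.from_url("https://wandb.ai/entity/project/reports/My-Report--abc123")  # placeholder URL
report.blocks = report.blocks + [wr.MarkdownBlock(text="Updated after the latest sweep.")]
report.save()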
to_htmlto_html(height: int = 1024, hidden: bool = False) → str
Generate HTML containing an iframe displaying this report. Commonly used within a Python notebook.
Arguments:
height (int): Height of the iframe.
hidden (bool): If True, hide the iframe. By default, set to False.
RunComparerA panel that compares metrics across different runs from the project the report pulls information from.
Attributes:
diff_only (Optional[Literal["split", True]]): Display only the difference across runs in a project. You can toggle this feature on and off in the W&B Report UI.RunsetA set of runs to display in a panel grid.
Attributes:
entity (str): An entity that owns or has the correct permissions to the project where the runs are stored.
project (str): The name of the project where the runs are stored.
name (str): The name of the run set. Set to Run set by default.
query (str): A query string to filter runs.
filters (Optional[str]): A filter string to filter runs.
groupby (LList[str]): A list of metric names to group by.
order (LList[OrderBy]): A list of OrderBy objects to order by.
custom_run_colors (dict): A dictionary mapping run IDs to colors.
RunsetGroupUI element that shows a group of runsets.
Attributes:
runset_name (str): The name of the runset.keys (Tuple[RunsetGroupKey, …]): The keys to group by. Pass in one or more RunsetGroupKey objects to group by.RunsetGroupKeyGroups runsets by a metric type and value. Part of a RunsetGroup. Specify the metric type and value to group by as key-value pairs.
Attributes:
key (Type[str] | Type[Config] | Type[SummaryMetric] | Type[Metric]): The metric type to group by.value (str): The value of the metric to group by.ScalarChartA panel object that shows a scalar chart.
Attributes:
title (Optional[str]): The text that appears at the top of the plot.
metric (MetricType): The name of a metric logged to your W&B project that the report pulls information from.
groupby_aggfunc (Optional[GroupAgg]): Aggregate runs with the specified function. Options include mean, min, max, median, sum, samples, or None.
groupby_rangefunc (Optional[GroupArea]): Group runs based on a range. Options include minmax, stddev, stderr, none, samples, or None.
custom_expressions (Optional[LList[str]]): A list of custom expressions to be used in the scalar chart.
legend_template (Optional[str]): The template for the legend.
font_size (Optional[FontSize]): The size of the chart's font. Options include small, medium, large, auto, or None.
ScatterPlotA panel object that shows a 2D or 3D scatter plot.
Arguments:
title (Optional[str]): The text that appears at the top of the plot.
x (Optional[SummaryOrConfigOnlyMetric]): The name of a metric logged to your W&B project that the report pulls information from. The metric specified is used for the x-axis.
y (Optional[SummaryOrConfigOnlyMetric]): One or more metrics logged to your W&B project that the report pulls information from. Metrics specified are plotted on the y-axis.
z (Optional[SummaryOrConfigOnlyMetric]): The metric plotted on the z-axis (for 3D scatter plots).
range_x (Tuple[float | None, float | None]): Tuple that specifies the range of the x-axis.
range_y (Tuple[float | None, float | None]): Tuple that specifies the range of the y-axis.
range_z (Tuple[float | None, float | None]): Tuple that specifies the range of the z-axis.
log_x (Optional[bool]): Plots the x-coordinates using a base-10 logarithmic scale.
log_y (Optional[bool]): Plots the y-coordinates using a base-10 logarithmic scale.
log_z (Optional[bool]): Plots the z-coordinates using a base-10 logarithmic scale.
running_ymin (Optional[bool]): Apply a moving average or rolling mean.
running_ymax (Optional[bool]): Apply a moving average or rolling mean.
running_ymean (Optional[bool]): Apply a moving average or rolling mean.
legend_template (Optional[str]): A string that specifies the format of the legend.
gradient (Optional[LList[GradientPoint]]): A list of gradient points that specify the color gradient of the plot.
font_size (Optional[FontSize]): The size of the plot's font. Options include small, medium, large, auto, or None.
regression (Optional[bool]): If True, a regression line is plotted on the scatter plot.
SoundCloudA block that renders a SoundCloud player.
Attributes:
html (str): The HTML code to embed the SoundCloud player.SpotifyA block that renders a Spotify player.
Attributes:
spotify_id (str): The Spotify ID of the track or playlist.SummaryMetricA summary metric to display in a report.
Attributes:
name (str): The name of the metric.TableOfContentsA block that contains a list of sections and subsections using H1, H2, and H3 HTML blocks specified in a report.
TextWithInlineCommentsA block of text with inline comments.
Attributes:
text (str): The text of the block.TwitterA block that displays a Twitter feed.
Attributes:
html (str): The HTML code to display the Twitter feed.UnorderedListA list of items in a bulleted list.
Attributes:
items (LList[str]): A list of one or more UnorderedListItem objects.UnorderedListItemA list item in an unordered list.
Attributes:
text (str): The text of the list item.VideoA block that renders a video.
Attributes:
url (str): The URL of the video.WeaveBlockArtifactA block that shows an artifact logged to W&B. The query takes the form of
project('entity', 'project').artifact('artifact-name')
The term “Weave” in the API name does not refer to the W&B Weave toolkit used for tracking and evaluating LLMs.
Attributes:
entity (str): The entity that owns or has the appropriate permissions to the project where the artifact is stored.project (str): The project where the artifact is stored.artifact (str): The name of the artifact to retrieve.tab Literal["overview", "metadata", "usage", "files", "lineage"]: The tab to display in the artifact panel.WeaveBlockArtifactVersionedFileA block that shows a versioned file logged to a W&B artifact. The query takes the form of
project('entity', 'project').artifactVersion('name', 'version').file('file-name')
The term “Weave” in the API name does not refer to the W&B Weave toolkit used for tracking and evaluating LLMs.
Attributes:
entity (str): The entity that owns or has the appropriate permissions to the project where the artifact is stored.project (str): The project where the artifact is stored.artifact (str): The name of the artifact to retrieve.version (str): The version of the artifact to retrieve.file (str): The name of the file stored in the artifact to retrieve.WeaveBlockSummaryTableA block that shows a W&B Table, pandas DataFrame, plot, or other value logged to W&B. The query takes the form of
project('entity', 'project').runs.summary['value']
The term “Weave” in the API name does not refer to the W&B Weave toolkit used for tracking and evaluating LLMs.
Attributes:
entity (str): The entity that owns or has the appropriate permissions to the project where the values are logged.project (str): The project where the value is logged in.table_name (str): The name of the table, DataFrame, plot, or value.WeavePanelAn empty query panel that can be used to display custom content using queries.
The term “Weave” in the API name does not refer to the W&B Weave toolkit used for tracking and evaluating LLMs.
WeavePanelArtifactA panel that shows an artifact logged to W&B.
The term “Weave” in the API name does not refer to the W&B Weave toolkit used for tracking and evaluating LLMs.
Attributes:
artifact (str): The name of the artifact to retrieve.tab Literal["overview", "metadata", "usage", "files", "lineage"]: The tab to display in the artifact panel.WeavePanelArtifactVersionedFileA panel that shows a versioned file logged to a W&B artifact.
project('entity', 'project').artifactVersion('name', 'version').file('file-name')
The term “Weave” in the API name does not refer to the W&B Weave toolkit used for tracking and evaluating LLMs.
Attributes:
artifact (str): The name of the artifact to retrieve.version (str): The version of the artifact to retrieve.file (str): The name of the file stored in the artifact to retrieve.WeavePanelSummaryTableA panel that shows a W&B Table, pandas DataFrame, plot, or other value logged to W&B. The query takes the form of
runs.summary['value']
The term “Weave” in the API name does not refer to the W&B Weave toolkit used for tracking and evaluating LLMs.
Attributes:
table_name (str): The name of the table, DataFrame, plot, or value.wandb_workspaces.workspacesPython library for programmatically working with W&B Workspace API.
# How to import
import wandb_workspaces.workspaces as ws
import wandb_workspaces.reports.v2 as wr  # panels such as LinePlot and BarPlot come from the Reports API
# Example of creating a workspace
workspace = ws.Workspace(
     name="Example W&B Workspace",
     entity="entity", # entity that owns the workspace
     project="project", # project that the workspace is associated with
     sections=[
         ws.Section(
             name="Validation Metrics",
             panels=[
                 wr.LinePlot(x="Step", y=["val_loss"]),
                 wr.BarPlot(metrics=["val_accuracy"]),
                 wr.ScalarChart(metric="f1_score", groupby_aggfunc="mean"),
             ],
             is_open=True,
         ),
     ],
)
workspace.save()
RunSettingsSettings for a run in a runset (left hand bar).
Attributes:
color (str): The color of the run in the UI. Can be hex (#ff0000), a CSS color (red), or rgb (rgb(255, 0, 0)).
disabled (bool): Whether the run is deactivated (eye closed in the UI). Default is set to False.
RunsetSettingsSettings for the runset (the left bar containing runs) in a workspace.
Attributes:
query (str): A query to filter the runset (can be a regex expression; see the next parameter).
regex_query (bool): Controls whether the query (above) is a regex expression. Default is set to False.
filters (LList[expr.FilterExpr]): A list of filters to apply to the runset. Filters are AND'd together. See FilterExpr for more information on creating filters.
groupby (LList[expr.MetricType]): A list of metrics to group by in the runset. Set to Metric, Summary, Config, Tags, or KeysInfo.
order (LList[expr.Ordering]): A list of metrics and orderings to apply to the runset.
run_settings (Dict[str, RunSettings]): A dictionary of run settings, where the key is the run's ID and the value is a RunSettings object.
SectionRepresents a section in a workspace.
Attributes:
name (str): The name/title of the section.
panels (LList[PanelTypes]): An ordered list of panels in the section. By default, the first is top-left and the last is bottom-right.
is_open (bool): Whether the section is open or closed. Default is closed.
layout_settings (Literal[standard, custom]): Settings for panel layout in the section.
panel_settings: Panel-level settings applied to all panels in the section, similar to WorkspaceSettings for a Section.
SectionLayoutSettingsPanel layout settings for a section, typically seen at the top right of the section in the W&B App Workspace UI.
Attributes:
layout (Literal[standard, custom]): The layout of panels in the section. standard follows the default grid layout; custom allows per-panel layouts controlled by the individual panel settings.
columns (int): In a standard layout, the number of columns in the layout. Default is 3.
rows (int): In a standard layout, the number of rows in the layout. Default is 2.
SectionPanelSettingsPanel settings for a section, similar to WorkspaceSettings for a section.
Settings applied here can be overridden by more granular Panel settings in this priority: Section < Panel.
Attributes:
x_axis (str): X-axis metric name setting. By default, set to Step.
x_min (Optional[float]): Minimum value for the x-axis.
x_max (Optional[float]): Maximum value for the x-axis.
smoothing_type (Literal['exponentialTimeWeighted', 'exponential', 'gaussian', 'average', 'none']): Smoothing type applied to all panels.
smoothing_weight (int): Smoothing weight applied to all panels.
WorkspaceRepresents a W&B workspace, including sections, settings, and config for run sets.
Attributes:
entity (str): The entity this workspace will be saved to (usually a user or team name).
project (str): The project this workspace will be saved to.
name: The name of the workspace.
sections (LList[Section]): An ordered list of sections in the workspace. The first section is at the top of the workspace.
settings (WorkspaceSettings): Settings for the workspace, typically seen at the top of the workspace in the UI.
runset_settings (RunsetSettings): Settings for the runset (the left bar containing runs) in a workspace.
The URL to the workspace in the W&B app.
from_urlfrom_url(url: str)
Get a workspace from a URL.
savesave()
Save the current workspace to W&B.
Returns:
Workspace: The updated workspace with the saved internal name and ID.
save_as_new_viewsave_as_new_view()
Save the current workspace as a new view to W&B.
Returns:
Workspace: The updated workspace with the saved internal name and ID.
WorkspaceSettingsSettings for the workspace, typically seen at the top of the workspace in the UI.
This object includes settings for the x-axis, smoothing, outliers, panels, tooltips, runs, and panel query bar.
Settings applied here can be overridden by more granular Section and Panel settings in this priority: Workspace < Section < Panel.
Attributes:
x_axis (str): X-axis metric name setting.
x_min (Optional[float]): Minimum value for the x-axis.
x_max (Optional[float]): Maximum value for the x-axis.
smoothing_type (Literal['exponentialTimeWeighted', 'exponential', 'gaussian', 'average', 'none']): Smoothing type applied to all panels.
smoothing_weight (int): Smoothing weight applied to all panels.
ignore_outliers (bool): Ignore outliers in all panels.
sort_panels_alphabetically (bool): Sorts panels in all sections alphabetically.
group_by_prefix (Literal[first, last]): Group panels by the first or up to the last prefix (first or last). Default is set to last.
remove_legends_from_panels (bool): Remove legends from all panels.
tooltip_number_of_runs (Literal[default, all, none]): The number of runs to show in the tooltip.
tooltip_color_run_names (bool): Whether to color run names in the tooltip to match the runset (True) or not (False). Default is set to True.
max_runs (int): The maximum number of runs to show per panel (this will be the first 10 runs in the runset).
point_visualization_method (Literal[line, point, line_point]): The visualization method for points.
panel_search_query (str): The query for the panel search bar (can be a regex expression).
auto_expand_panel_search_results (bool): Whether to auto-expand the panel search results.
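As a hedged sketch of these settings in use (the entity and project names are placeholders; attribute names follow the list above):
import wandb_workspaces.workspaces as ws

workspace = ws.Workspace(
    entity="entity",    # placeholder entity
    project="project",  # placeholder project
    name="Smoothed metrics view",
    settings=ws.WorkspaceSettings(
        x_axis="Step",
        smoothing_type="gaussian",
        smoothing_weight=5,
        ignore_outliers=True,
    ),
)
workspace.save()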