core ¤

Asset dataclass ¤

Asset(
    rid: str,
    name: str,
    description: str | None,
    properties: Mapping[str, str],
    labels: Sequence[str],
    _clients: _Clients,
)

Bases: HasRid

nominal_url property ¤

nominal_url: str

Returns a link to the page for this Asset in the Nominal app

add_attachments ¤

add_attachments(
    attachments: Iterable[Attachment] | Iterable[str],
) -> None

Add attachments that have already been uploaded to this asset.

attachments can be Attachment instances, or attachment RIDs.

add_connection ¤

add_connection(
    data_scope_name: str,
    connection: Connection | str,
    *,
    series_tags: dict[str, str] | None = None
) -> None

Add a connection to this asset.

data_scope_name maps the "data scope name" (the connection's name within the asset) to a Connection (or connection RID). The same type of connection should use the same data scope name across assets, since checklists and templates use data scope names to reference connections.

add_dataset ¤

add_dataset(data_scope_name: str, dataset: Dataset | str) -> None

Add a dataset to this asset.

Datasets map "data scope names" (their name within the asset) to a Dataset (or dataset RID). The same type of dataset should use the same data scope name across assets, since checklists and templates use data scope names to reference datasets.
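
For example, a minimal sketch (the RIDs and the "flight_data" scope name are hypothetical, and the import path is assumed):

from nominal.core import NominalClient  # assumed import path for this module

client = NominalClient.create("https://api.gov.nominal.io/api", token=None)  # token read from ~/.nominal.yml
asset = client.get_asset("...")  # asset RID, e.g. copied from the Nominal app
asset.add_dataset("flight_data", "...")  # dataset RID, or a Dataset instance
for data_scope_name, dataset in asset.list_datasets():
    print(data_scope_name, dataset.rid)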

add_log_set ¤

add_log_set(data_scope_name: str, log_set: LogSet | str) -> None

Add a log set to this asset.

Log sets map "data scope names" (their name within the asset) to a LogSet (or log set RID).

archive ¤

archive() -> None

Archive this asset. Archived assets are not deleted, but are hidden from the UI.

list_datasets ¤

list_datasets() -> Sequence[tuple[str, Dataset]]

List the datasets associated with this asset. Returns (data_scope_name, dataset) pairs for each dataset.

remove_attachments ¤

remove_attachments(
    attachments: Iterable[Attachment] | Iterable[str],
) -> None

Remove attachments from this asset. Does not remove the attachments from Nominal.

attachments can be Attachment instances, or attachment RIDs.

remove_data_sources ¤

remove_data_sources(
    *,
    data_scope_names: Sequence[str] | None = None,
    data_sources: (
        Sequence[Connection | Dataset | Video | str] | None
    ) = None
) -> None

Remove data sources from this asset.

The list data_sources can contain Connection, Dataset, or Video instances, or RIDs as strings.

unarchive ¤

unarchive() -> None

Unarchive this asset, allowing it to be viewed in the UI.

update ¤

update(
    *,
    name: str | None = None,
    description: str | None = None,
    properties: Mapping[str, str] | None = None,
    labels: Sequence[str] | None = None,
    links: Sequence[str] | Sequence[Link] | None = None
) -> Self

Replace asset metadata. Updates the current instance, and returns it. Only the metadata passed in will be replaced; the rest will remain untouched.

Links can be URLs or tuples of (URL, name).

Note: This replaces the metadata rather than appending it. To append to labels or properties, merge them before calling this method. E.g.:

new_labels = ["new-label-a", "new-label-b", *asset.labels]
asset = asset.update(labels=new_labels)

Attachment dataclass ¤

Attachment(
    rid: str,
    name: str,
    description: str,
    properties: Mapping[str, str],
    labels: Sequence[str],
    _clients: _Clients,
)

Bases: HasRid

archive ¤

archive() -> None

Archive this attachment. Archived attachments are not deleted, but are hidden from the UI.

get_contents ¤

get_contents() -> BinaryIO

Retrieve the contents of this attachment. Returns a file-like object in binary mode for reading.

unarchive ¤

unarchive() -> None

Unarchive this attachment, allowing it to be viewed in the UI.

update ¤

update(
    *,
    name: str | None = None,
    description: str | None = None,
    properties: Mapping[str, str] | None = None,
    labels: Sequence[str] | None = None
) -> Self

Replace attachment metadata. Updates the current instance, and returns it.

Only the metadata passed in will be replaced; the rest will remain untouched.

Note: This replaces the metadata rather than appending it. To append to labels or properties, merge them before calling this method. E.g.:

new_labels = ["new-label-a", "new-label-b", *attachment.labels]
attachment = attachment.update(labels=new_labels)

write ¤

write(path: Path, mkdir: bool = True) -> None

Write an attachment to the filesystem.

path should be the path you want to save to, i.e. a file, not a directory.
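
A minimal sketch of saving an attachment to disk (the RID is a placeholder; the import path is assumed):

from pathlib import Path

from nominal.core import NominalClient  # assumed import path

client = NominalClient.create("https://api.gov.nominal.io/api", token=None)
attachment = client.get_attachment("...")  # attachment RID
attachment.write(Path("attachments") / attachment.name)  # mkdir=True creates parent directories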

Channel dataclass ¤

Channel(
    rid: str,
    name: str,
    data_source: str,
    data_type: ChannelDataType | None,
    unit: str | None,
    description: str | None,
    _clients: _Clients,
)

Bases: HasRid

Metadata for working with channels.

get_decimated ¤

get_decimated(
    start: str | datetime | IntegralNanosecondsUTC,
    end: str | datetime | IntegralNanosecondsUTC,
    *,
    buckets: int | None = None,
    resolution: int | None = None
) -> DataFrame

Retrieve the channel data as a pandas.DataFrame, decimated to the given buckets or resolution.

Specify either the number of buckets or the resolution for the output. Resolution is given in picoseconds for picosecond-granularity datasets, and in nanoseconds otherwise.
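
For example, a sketch that fetches one day of data decimated into 500 buckets (the channel RID is a placeholder; the import path is assumed):

from nominal.core import NominalClient  # assumed import path

client = NominalClient.create("https://api.gov.nominal.io/api", token=None)
channel = client.get_channel("...")  # channel RID
df = channel.get_decimated(
    "2024-01-01T00:00:00Z",
    "2024-01-02T00:00:00Z",
    buckets=500,
)
print(df.head())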

to_pandas ¤

to_pandas(
    start: datetime | IntegralNanosecondsUTC | None = None,
    end: datetime | IntegralNanosecondsUTC | None = None,
) -> Series[Any]

Retrieve the channel data as a pandas.Series.

The index of the series is the timestamp of the data. The index name is "timestamp" and the series name is the channel name.

Example:¤
s = channel.to_pandas()
print(s.name, "mean:", s.mean())

Checklist dataclass ¤

Checklist(
    rid: str,
    name: str,
    description: str,
    properties: Mapping[str, str],
    labels: Sequence[str],
    checklist_variables: Sequence[ChecklistVariable],
    checks: Sequence[Check],
    _clients: _Clients,
)

Bases: HasRid

archive ¤

archive() -> None

Archive this checklist. Archived checklists are not deleted, but are hidden from the UI.

execute_streaming ¤

execute_streaming(
    assets: Sequence[Asset | str],
    integration_rids: Sequence[str],
    *,
    evaluation_delay: timedelta = timedelta(),
    recovery_delay: timedelta = timedelta(seconds=15)
) -> None

Execute the checklist for the given assets, as sketched below.

Parameters:

  • assets – Can be Asset instances, or asset RIDs.

  • integration_rids – Checklist violations will be sent to the specified integrations. At least one integration must be specified. See https://app.gov.nominal.io/settings/integrations for a list of available integrations.

  • evaluation_delay – Delays the evaluation of the streaming checklist. This is useful for when data is delayed.

  • recovery_delay – Specifies the minimum amount of time that must pass before a check can recover from a failure. Minimum value is 15 seconds.
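
A hedged sketch (checklist is assumed to be a Checklist obtained elsewhere via the SDK; the RIDs are placeholders):

from datetime import timedelta

# `checklist` is assumed to be a Checklist instance obtained via the SDK.
checklist.execute_streaming(
    assets=["..."],  # Asset instances or asset RIDs
    integration_rids=["..."],  # at least one integration RID is required
    evaluation_delay=timedelta(seconds=30),  # tolerate delayed data
    recovery_delay=timedelta(seconds=15),  # the minimum allowed value
)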

reload_streaming ¤

reload_streaming() -> None

Reload the checklist.

stop_streaming ¤

stop_streaming() -> None

Stop the checklist.

stop_streaming_for_assets ¤

stop_streaming_for_assets(assets: Sequence[Asset | str]) -> None

Stop the checklist for the given assets.

unarchive ¤

unarchive() -> None

Unarchive this checklist, allowing it to be viewed in the UI.

Connection dataclass ¤

Connection(
    rid: str,
    name: str,
    description: str | None,
    _tags: Mapping[str, Sequence[str]],
    _clients: _Clients,
)

Bases: HasRid

archive ¤

archive() -> None

Archive this connection. Archived connections are not deleted, but are hidden from the UI.

get_channel ¤

get_channel(name: str, tags: dict[str, str] | None = None) -> Channel

Retrieve a channel with the given name and tags.
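
For example, a sketch (the connection RID, channel name, and tags are hypothetical; the import path is assumed):

from nominal.core import NominalClient  # assumed import path

client = NominalClient.create("https://api.gov.nominal.io/api", token=None)
connection = client.get_connection("...")  # connection RID
channel = connection.get_channel("engine_rpm", tags={"vehicle": "v1"})  # hypothetical name and tags
print(channel.name, channel.unit)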

unarchive ¤

unarchive() -> None

Unarchive this connection, making it visible in the UI.

DataReview dataclass ¤

DataReview(
    rid: str,
    run_rid: str,
    checklist_rid: str,
    checklist_commit: str,
    completed: bool,
    _clients: _Clients,
)

Bases: HasRid

nominal_url property ¤

nominal_url: str

Returns a link to the page for this Data Review in the Nominal app

archive ¤

archive() -> None

Archive this data review. Archived data reviews are not deleted, but are hidden from the UI.

NOTE: it is not currently possible to unarchive a data review once archived.

get_violations ¤

get_violations() -> Sequence[CheckViolation]

Retrieves the list of check violations for the data review.

poll_for_completion ¤

poll_for_completion(
    interval: timedelta = timedelta(seconds=2),
) -> DataReview

Polls the data review until it is completed.

reload ¤

reload() -> DataReview

Reloads the data review from the server.

DataReviewBuilder dataclass ¤

DataReviewBuilder(
    _integration_rids: list[str],
    _requests: list[CreateDataReviewRequest],
    _clients: _Clients,
)

initiate ¤

Initiates a batch data review process.

Parameters:

  • wait_for_completion ¤

    (bool, default: True ) –

    If True, waits for the data review process to complete before returning.

Dataset dataclass ¤

Dataset(
    rid: str,
    name: str,
    description: str | None,
    properties: Mapping[str, str],
    labels: Sequence[str],
    bounds: DatasetBounds | None,
    _clients: _Clients,
)

Bases: HasRid

nominal_url property ¤

nominal_url: str

Returns a URL to the page in the Nominal app containing this dataset

add_csv_to_dataset ¤

add_csv_to_dataset(
    path: Path | str,
    timestamp_column: str,
    timestamp_type: _AnyTimestampType,
) -> None

Append to a dataset from a csv on-disk.

add_data_to_dataset ¤

add_data_to_dataset(
    path: Path | str,
    timestamp_column: str,
    timestamp_type: _AnyTimestampType,
) -> None

Append to a dataset from data on-disk.

add_mcap_to_dataset ¤

add_mcap_to_dataset(
    path: Path | str,
    include_topics: Iterable[str] | None = None,
    exclude_topics: Iterable[str] | None = None,
) -> None

Add an MCAP file to an existing dataset.

add_to_dataset_from_io ¤

add_to_dataset_from_io(
    dataset: BinaryIO,
    timestamp_column: str,
    timestamp_type: _AnyTimestampType,
    file_type: tuple[str, str] | FileType = CSV,
) -> None

Append to a dataset from a file-like object.

file_type: a (extension, mimetype) pair describing the type of file.
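
For example, a sketch appending in-memory CSV data (the column layout is hypothetical; the import path is assumed):

import io

from nominal.core import NominalClient  # assumed import path

client = NominalClient.create("https://api.gov.nominal.io/api", token=None)
dataset = client.get_dataset("...")  # dataset RID
csv_data = b"timestamp,temperature\n2024-01-01T00:00:00Z,21.5\n"
dataset.add_to_dataset_from_io(
    io.BytesIO(csv_data),
    timestamp_column="timestamp",
    timestamp_type="iso_8601",  # file_type defaults to CSV
)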

archive ¤

archive() -> None

Archive this dataset. Archived datasets are not deleted, but are hidden from the UI.

get_channels ¤

get_channels(
    exact_match: Sequence[str] = (), fuzzy_search_text: str = ""
) -> Iterable[Channel]

Look up the metadata for all matching channels associated with this dataset. NOTE: returned channels may also be associated with other datasets; use with caution.


Parameters:

  • exact_match – Filter the returned channels to those whose names match all provided strings (case insensitive). For example, a channel named 'engine_turbine_rpm' would match against ['engine', 'turbine', 'rpm'], whereas a channel named 'engine_turbine_flowrate' would not!

  • fuzzy_search_text – Filters the returned channels to those whose names fuzzily match the provided string.

Yields a sequence of channel metadata objects which match the provided query parameters.

poll_until_ingestion_completed ¤

poll_until_ingestion_completed(
    interval: timedelta = timedelta(seconds=1),
) -> Self

Block until dataset ingestion has completed. After a dataset is uploaded, this method polls Nominal for its ingest status on the given interval.


Raises:

  • NominalIngestFailed – if the ingest failed

  • NominalIngestError – if the ingest status is not known

set_channel_prefix_tree ¤

set_channel_prefix_tree(delimiter: str = '.') -> None

Index channels hierarchically by a given delimiter.

Primarily, the result of this operation is to prompt the frontend to represent channels in a tree-like manner that allows folding channels by common roots.

set_channel_units ¤

set_channel_units(
    channels_to_units: Mapping[str, str | None],
    validate_schema: bool = False,
) -> None

Set units for channels based on a provided mapping of channel names to units.


Parameters:

  • channels_to_units – A mapping of channel names to unit symbols. NOTE: any existing units may be cleared from a channel by providing None as a symbol.

  • validate_schema – If true, raises a ValueError if non-existent channel names are provided in channels_to_units. Default is False.

Raises:

  • ValueError – Unsupported unit symbol provided.

  • conjure_python_client.ConjureHTTPError – Error completing requests.
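
For example, a sketch (the channel names are hypothetical, and unit symbols must be supported by Nominal; see get_all_units):

# `dataset` is assumed to be a Dataset, e.g. from client.get_dataset(...)
dataset.set_channel_units(
    {
        "engine_rpm": "rpm",  # hypothetical channel name and unit symbol
        "legacy_channel": None,  # None clears any existing unit
    },
    validate_schema=True,  # raise ValueError on unknown channel names
)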

to_pandas ¤

to_pandas(
    channel_exact_match: Sequence[str] = (),
    channel_fuzzy_search_text: str = "",
) -> DataFrame

Download a dataset to a pandas dataframe, optionally filtering for only specific channels of the dataset.


Parameters:

  • channel_exact_match – Filter the returned channels to those whose names match all provided strings (case insensitive). For example, a channel named 'engine_turbine_rpm' would match against ['engine', 'turbine', 'rpm'], whereas a channel named 'engine_turbine_flowrate' would not!

  • channel_fuzzy_search_text – Filters the returned channels to those whose names fuzzily match the provided string.

Returns:

  • A pandas dataframe whose index is the timestamp of the data, and whose column names match those of the selected channels.

Example:¤
import nominal as nm

rid = "..." # Taken from the UI or via the SDK
dataset = nm.get_dataset(rid)
s = dataset.to_pandas()
print("index:", s.index, "index mean:", s.index.mean())

unarchive ¤

unarchive() -> None

Unarchives this dataset, allowing it to show up in the 'All Datasets' pane in the UI.

update ¤

update(
    *,
    name: str | None = None,
    description: str | None = None,
    properties: Mapping[str, str] | None = None,
    labels: Sequence[str] | None = None
) -> Self

Replace dataset metadata. Updates the current instance, and returns it.

Only the metadata passed in will be replaced; the rest will remain untouched.

Note: This replaces the metadata rather than appending it. To append to labels or properties, merge them before calling this method. E.g.:

new_labels = ["new-label-a", "new-label-b", *dataset.labels]
dataset = dataset.update(labels=new_labels)

LogSet dataclass ¤

LogSet(
    rid: str,
    name: str,
    timestamp_type: LogTimestampType,
    description: str | None,
    _clients: _Clients,
)

Bases: HasRid

stream_logs ¤

stream_logs() -> Iterable[Log]

Iterate over the logs.

NominalClient dataclass ¤

NominalClient(_clients: ClientsBunch)

checklist_builder ¤

checklist_builder(
    name: str,
    description: str = "",
    assignee_email: str | None = None,
    assignee_rid: str | None = None,
    default_ref_name: str | None = None,
) -> ChecklistBuilder

Creates a checklist builder.

You can provide one of assignee_email or assignee_rid. If neither is provided, the RID of the user executing the script will be used as the assignee. If both are provided, a ValueError is raised.

create classmethod ¤

create(
    base_url: str,
    token: str | None,
    trust_store_path: str | None = None,
    connect_timeout: float = 30,
) -> Self

Create a connection to the Nominal platform.

Parameters:

  • base_url – The URL of the Nominal API platform, e.g. "https://api.gov.nominal.io/api".

  • token – An API token to authenticate with. By default, the token will be looked up in ~/.nominal.yml.

  • trust_store_path – Path to a trust store CA root file to initiate SSL connections. If not provided, certifi's trust store is used.
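
For example, a minimal sketch of connecting and verifying authentication (the import path is assumed):

from nominal.core import NominalClient  # assumed import path

client = NominalClient.create(
    base_url="https://api.gov.nominal.io/api",
    token=None,  # fall back to the token in ~/.nominal.yml
)
print(client.get_user())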

create_ardupilot_dataflash_dataset ¤

create_ardupilot_dataflash_dataset(
    path: Path | str,
    name: str | None,
    description: str | None = None,
    *,
    labels: Sequence[str] = (),
    properties: Mapping[str, str] | None = None
) -> Dataset

Create a dataset from an ArduPilot DataFlash log file.

If name is None, the name of the file will be used.

See create_dataset_from_io for more details.

create_asset ¤

create_asset(
    name: str,
    description: str | None = None,
    *,
    properties: Mapping[str, str] | None = None,
    labels: Sequence[str] = ()
) -> Asset

Create an asset.

create_attachment_from_io ¤

create_attachment_from_io(
    attachment: BinaryIO,
    name: str,
    file_type: tuple[str, str] | FileType = BINARY,
    description: str | None = None,
    *,
    properties: Mapping[str, str] | None = None,
    labels: Sequence[str] = ()
) -> Attachment

Upload an attachment. The attachment must be a file-like object in binary mode, e.g. open(path, "rb") or io.BytesIO. If the file is not in binary mode, the requests library blocks indefinitely.

create_csv_dataset ¤

create_csv_dataset(
    path: Path | str,
    name: str | None,
    timestamp_column: str,
    timestamp_type: _AnyTimestampType,
    description: str | None = None,
    *,
    labels: Sequence[str] = (),
    properties: Mapping[str, str] | None = None,
    prefix_tree_delimiter: str | None = None
) -> Dataset

Create a dataset from a CSV file.

If name is None, the name of the file will be used.

See create_dataset_from_io for more details.

create_dataset_from_io ¤

create_dataset_from_io(
    dataset: BinaryIO,
    name: str,
    timestamp_column: str,
    timestamp_type: _AnyTimestampType,
    file_type: tuple[str, str] | FileType = CSV,
    description: str | None = None,
    *,
    labels: Sequence[str] = (),
    properties: Mapping[str, str] | None = None,
    prefix_tree_delimiter: str | None = None
) -> Dataset

Create a dataset from a file-like object. The dataset must be a file-like object in binary mode, e.g. open(path, "rb") or io.BytesIO. If the file is not in binary mode, the requests library blocks indefinitely.

Timestamp column types must be a CustomTimestampFormat or one of the following literals:

  • "iso_8601" – ISO 8601 formatted strings

  • "epoch_{unit}" – epoch timestamps in UTC (floats or ints)

  • "relative_{unit}" – relative timestamps (floats or ints)

where {unit} is one of: nanoseconds | microseconds | milliseconds | seconds | minutes | hours | days
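
For example, a sketch creating a dataset from in-memory CSV data with relative timestamps (the data and name are hypothetical; the import path is assumed):

import io

from nominal.core import NominalClient  # assumed import path

client = NominalClient.create("https://api.gov.nominal.io/api", token=None)
csv_data = b"timestamp,altitude\n0.0,100.2\n0.1,101.0\n"
dataset = client.create_dataset_from_io(
    io.BytesIO(csv_data),
    name="example-dataset",
    timestamp_column="timestamp",
    timestamp_type="relative_seconds",
)
dataset.poll_until_ingestion_completed()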

create_log_set ¤

create_log_set(
    name: str,
    logs: (
        Iterable[Log]
        | Iterable[tuple[datetime | IntegralNanosecondsUTC, str]]
    ),
    timestamp_type: LogTimestampType = "absolute",
    description: str | None = None,
) -> LogSet

Create an immutable log set with the given logs.

The logs are attached during creation and cannot be modified afterwards. Logs can either be of type Log or a tuple of a timestamp and a string. Timestamp type must be either 'absolute' or 'relative'.
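
For example, a sketch using (timestamp, message) tuples (the name and messages are hypothetical; the import path is assumed):

from datetime import datetime, timezone

from nominal.core import NominalClient  # assumed import path

client = NominalClient.create("https://api.gov.nominal.io/api", token=None)
log_set = client.create_log_set(
    "example-logs",
    logs=[
        (datetime(2024, 1, 1, 0, 0, 0, tzinfo=timezone.utc), "system boot"),
        (datetime(2024, 1, 1, 0, 0, 5, tzinfo=timezone.utc), "sensor online"),
    ],
    timestamp_type="absolute",
)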

create_mcap_dataset ¤

create_mcap_dataset(
    path: Path | str,
    name: str | None,
    description: str | None = None,
    include_topics: Iterable[str] | None = None,
    exclude_topics: Iterable[str] | None = None,
    *,
    labels: Sequence[str] = (),
    properties: Mapping[str, str] | None = None
) -> Dataset

Create a dataset from an MCAP file.

If name is None, the name of the file will be used.

If include_topics is None (default), all channels with a "protobuf" message encoding are included.

See create_dataset_from_io for more details on the other arguments.

create_run ¤

create_run(
    name: str,
    start: datetime | IntegralNanosecondsUTC,
    end: datetime | IntegralNanosecondsUTC | None,
    description: str | None = None,
    *,
    properties: Mapping[str, str] | None = None,
    labels: Sequence[str] = (),
    attachments: Iterable[Attachment] | Iterable[str] = (),
    asset: Asset | str | None = None
) -> Run

Create a run.

create_tabular_dataset ¤

create_tabular_dataset(
    path: Path | str,
    name: str | None,
    timestamp_column: str,
    timestamp_type: _AnyTimestampType,
    description: str | None = None,
    *,
    labels: Sequence[str] = (),
    properties: Mapping[str, str] | None = None,
    prefix_tree_delimiter: str | None = None
) -> Dataset

Create a dataset from a table-like file (CSV, parquet, etc.).

If name is None, the name of the file will be used.

See create_dataset_from_io for more details.

create_video ¤

create_video(
    path: Path | str,
    name: str | None,
    start: datetime | IntegralNanosecondsUTC | None = None,
    frame_timestamps: Sequence[IntegralNanosecondsUTC] | None = None,
    description: str | None = None,
    *,
    labels: Sequence[str] = (),
    properties: Mapping[str, str] | None = None
) -> Video

Create a video from an h264/h265 encoded video file (mp4, mkv, ts, etc.).

If name is None, the name of the file will be used.

See create_video_from_io for more details.

create_video_from_io ¤

create_video_from_io(
    video: BinaryIO,
    name: str,
    start: datetime | IntegralNanosecondsUTC | None = None,
    frame_timestamps: Sequence[IntegralNanosecondsUTC] | None = None,
    description: str | None = None,
    file_type: tuple[str, str] | FileType = MP4,
    *,
    labels: Sequence[str] = (),
    properties: Mapping[str, str] | None = None
) -> Video

Create a video from a file-like object. The video must be a file-like object in binary mode, e.g. open(path, "rb") or io.BytesIO.


Parameters:

  • video – File-like object to read video data from.

  • name – Name of the video to create in Nominal.

  • start – Starting timestamp of the video.

  • frame_timestamps – Per-frame timestamps (in nanoseconds since the Unix epoch) for every frame of the video.

  • description – Description of the video to create in Nominal.

  • file_type – Type of data being uploaded.

  • labels – Labels to apply to the video in Nominal.

  • properties – Properties to apply to the video in Nominal.

Returns:

  • Handle to the created video.

Note:¤
Exactly one of 'start' and 'frame_timestamps' **must** be provided. Most users will
want to provide a starting timestamp: frame_timestamps is primarily useful when the scale
of the video data is not 1:1 with the playback speed or non-uniform over the course of the video,
for example, 200fps video artificially slowed to 30 fps without dropping frames. This will result
in the playhead on charts within the product playing at the rate of the underlying data rather than
time elapsed in the video playback.

create_video_from_mcap_io ¤

create_video_from_mcap_io(
    mcap: BinaryIO,
    topic: str,
    name: str,
    description: str | None = None,
    file_type: tuple[str, str] | FileType = MCAP,
    *,
    labels: Sequence[str] = (),
    properties: Mapping[str, str] | None = None
) -> Video

Create a video from a topic in an MCAP file.

mcap must be a file-like object in binary mode, e.g. open(path, "rb") or io.BytesIO.

get_all_units ¤

get_all_units() -> Sequence[Unit]

Retrieve list of metadata for all supported units within Nominal

get_asset ¤

get_asset(rid: str) -> Asset

Retrieve an asset by its RID.

get_attachment ¤

get_attachment(rid: str) -> Attachment

Retrieve an attachment by its RID.

get_attachments ¤

get_attachments(rids: Iterable[str]) -> Sequence[Attachment]

Retrieve attachments by their RIDs.

get_channel ¤

get_channel(rid: str) -> Channel

Get metadata for a given channel by looking up its RID.

Parameters:

  • rid – Identifier for the channel to look up.

Returns:

  • Resolved metadata for the requested channel.

Raises:

  • conjure_python_client.ConjureHTTPError – An error occurred while looking up the channel. This typically occurs when there is no such channel for the given RID.

get_commensurable_units ¤

get_commensurable_units(unit_symbol: str) -> Sequence[Unit]

Get the list of units that are commensurable (convertible to/from) the given unit symbol.

get_connection ¤

get_connection(rid: str) -> Connection

Retrieve a connection by its RID.

get_dataset ¤

get_dataset(rid: str) -> Dataset

Retrieve a dataset by its RID.

get_datasets ¤

get_datasets(rids: Iterable[str]) -> Sequence[Dataset]

Retrieve datasets by their RIDs.

get_log_set ¤

get_log_set(log_set_rid: str) -> LogSet

Retrieve a log set along with its metadata given its RID.

get_run ¤

get_run(rid: str) -> Run

Retrieve a run by its RID.

get_unit ¤

get_unit(unit_symbol: str) -> Unit | None

Get details of the given unit symbol, or None if invalid.

Parameters:

  • unit_symbol – Symbol of the unit to get metadata for. NOTE: this currently requires that units are formatted as laid out in the latest UCUM standards (see https://ucum.org/ucum).

Returns:

  • Rendered Unit metadata if the symbol is valid and supported by Nominal, or None if no such unit symbol matches.

get_user ¤

get_user() -> User

Retrieve the user associated with this client.

get_video ¤

get_video(rid: str) -> Video

Retrieve a video by its RID.

get_videos ¤

get_videos(rids: Iterable[str]) -> Sequence[Video]

Retrieve videos by their RIDs.

list_streaming_checklists ¤

list_streaming_checklists(
    asset: Asset | str | None = None,
) -> Iterable[str]

List all Streaming Checklists.

Parameters:

  • asset ¤

    (Asset | str | None, default: None ) –

    if provided, only return checklists associated with the given asset.

search_assets ¤

search_assets(
    search_text: str | None = None,
    label: str | None = None,
    property: tuple[str, str] | None = None,
    *,
    labels: Sequence[str] | None = None,
    properties: Mapping[str, str] | None = None
) -> Sequence[Asset]

Search for assets meeting the specified filters. Filters are ANDed together, e.g. (asset.label == label) AND (asset.search_text =~ field)

Parameters:

  • search_text ¤

    (str | None, default: None ) –

    case-insensitive search for any of the keywords in all string fields

  • label ¤

    (str | None, default: None ) –

    Deprecated, use labels instead.

  • property ¤

    (tuple[str, str] | None, default: None ) –

    Deprecated, use properties instead.

  • labels ¤

    (Sequence[str] | None, default: None ) –

    A sequence of labels that must ALL be present on an asset to be included.

  • properties ¤

    (Mapping[str, str] | None, default: None ) –

    A mapping of key-value pairs that must ALL be present on an asset to be included.

Returns:

  • Sequence[Asset]

    All assets which match all of the provided conditions

search_runs ¤

search_runs(
    start: str | datetime | IntegralNanosecondsUTC | None = None,
    end: str | datetime | IntegralNanosecondsUTC | None = None,
    name_substring: str | None = None,
    label: str | None = None,
    property: tuple[str, str] | None = None,
    *,
    labels: Sequence[str] | None = None,
    properties: Mapping[str, str] | None = None
) -> Sequence[Run]

Search for runs meeting the specified filters. Filters are ANDed together, e.g. (run.label == label) AND (run.end <= end)

Parameters:

  • start ¤

    (str | datetime | IntegralNanosecondsUTC | None, default: None ) –

    Inclusive start time for filtering runs.

  • end ¤

    (str | datetime | IntegralNanosecondsUTC | None, default: None ) –

    Inclusive end time for filtering runs.

  • name_substring ¤

    (str | None, default: None ) –

    Searches for a (case-insensitive) substring in the name.

  • label ¤

    (str | None, default: None ) –

    Deprecated, use labels instead.

  • property ¤

    (tuple[str, str] | None, default: None ) –

    Deprecated, use properties instead.

  • labels ¤

    (Sequence[str] | None, default: None ) –

    A sequence of labels that must ALL be present on a run to be included.

  • properties ¤

    (Mapping[str, str] | None, default: None ) –

    A mapping of key-value pairs that must ALL be present on a run to be included.

Returns:

  • Sequence[Run]

    All runs which match all of the provided conditions
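
For example, a sketch combining several filters (the filter values are hypothetical; the import path is assumed):

from datetime import datetime, timezone

from nominal.core import NominalClient  # assumed import path

client = NominalClient.create("https://api.gov.nominal.io/api", token=None)
runs = client.search_runs(
    start=datetime(2024, 1, 1, tzinfo=timezone.utc),
    name_substring="flight",
    labels=["qualification"],
    properties={"vehicle": "v1"},
)
for run in runs:
    print(run.run_number, run.name)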

set_channel_units ¤

set_channel_units(
    rids_to_types: Mapping[str, str | None]
) -> Sequence[Channel]

Sets the units for a set of channels based on user-provided unit symbols.

Parameters:

  • rids_to_types – Mapping of channel RIDs -> unit symbols (e.g. 'm/s'). NOTE: providing None as the unit symbol clears any existing units for the channels.

Returns:

  • A sequence of metadata for all updated channels.

Raises:

  • conjure_python_client.ConjureHTTPError – An error occurred while setting metadata on the channel. This typically occurs when either the units are invalid, or there are no channels with the given RIDs present.

Run dataclass ¤

Run(
    rid: str,
    name: str,
    description: str,
    properties: Mapping[str, str],
    labels: Sequence[str],
    start: IntegralNanosecondsUTC,
    end: IntegralNanosecondsUTC | None,
    run_number: int,
    _clients: _Clients,
)

Bases: HasRid

nominal_url property ¤

nominal_url: str

Returns a link to the page for this Run in the Nominal app

add_attachments ¤

add_attachments(
    attachments: Iterable[Attachment] | Iterable[str],
) -> None

Add attachments that have already been uploaded to this run.

attachments can be Attachment instances, or attachment RIDs.

add_connection ¤

add_connection(
    ref_name: str,
    connection: Connection | str,
    *,
    series_tags: dict[str, str] | None = None
) -> None

Add a connection to this run.

ref_name maps the "ref name" (the connection's name within the run) to a Connection (or connection RID). The same type of connection should use the same ref name across runs, since checklists and templates use ref names to reference connections.

add_dataset ¤

add_dataset(ref_name: str, dataset: Dataset | str) -> None

Add a dataset to this run.

Datasets map "ref names" (their name within the run) to a Dataset (or dataset RID). The same type of dataset should use the same ref name across runs, since checklists and templates use ref names to reference datasets.

add_datasets ¤

add_datasets(datasets: Mapping[str, Dataset | str]) -> None

Add multiple datasets to this run.

Datasets map "ref names" (their name within the run) to a Dataset (or dataset RID). The same type of dataset should use the same ref name across runs, since checklists and templates use ref names to reference datasets.
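
For example, a sketch (the RIDs and ref names are hypothetical; the import path is assumed):

from nominal.core import NominalClient  # assumed import path

client = NominalClient.create("https://api.gov.nominal.io/api", token=None)
run = client.get_run("...")  # run RID
run.add_datasets(
    {
        "telemetry": "...",  # ref name -> dataset RID
        "gps": client.get_dataset("..."),  # or a Dataset instance
    }
)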

add_log_set ¤

add_log_set(ref_name: str, log_set: LogSet | str) -> None

Add a log set to this run.

Log sets map "ref names" (their name within the run) to a Log set (or log set rid).

add_log_sets ¤

add_log_sets(log_sets: Mapping[str, LogSet | str]) -> None

Add multiple log sets to this run.

Log sets map "ref names" (their name within the run) to a Log set (or log set rid).

add_video ¤

add_video(ref_name: str, video: Video | str) -> None

Add a video to this run via a Video object or video RID.

archive ¤

archive() -> None

Archive this run. Archived runs are not deleted, but are hidden from the UI.

NOTE: it is not currently possible to unarchive a run once archived.

list_assets ¤

list_assets() -> Sequence[Asset]

List assets associated with this run.

list_attachments ¤

list_attachments() -> Sequence[Attachment]

List a sequence of Attachments associated with this Run.

list_connections ¤

list_connections() -> Sequence[tuple[str, Connection]]

List the connections associated with this run. Returns (ref_name, connection) pairs for each connection.

list_datasets ¤

list_datasets() -> Sequence[tuple[str, Dataset]]

List the datasets associated with this run. Returns (ref_name, dataset) pairs for each dataset.

list_log_sets ¤

list_log_sets() -> Sequence[tuple[str, LogSet]]

List the log sets associated with this run. Returns (ref_name, log_set) pairs for each log set.

list_videos ¤

list_videos() -> Sequence[tuple[str, Video]]

List the videos associated with this run. Returns (ref_name, video) pairs for each video.

remove_attachments ¤

remove_attachments(
    attachments: Iterable[Attachment] | Iterable[str],
) -> None

Remove attachments from this run. Does not remove the attachments from Nominal.

attachments can be Attachment instances, or attachment RIDs.

remove_data_sources ¤

remove_data_sources(
    *,
    ref_names: Sequence[str] | None = None,
    data_sources: (
        Sequence[Connection | Dataset | Video | str] | None
    ) = None
) -> None

Remove data sources from this run.

The list data_sources can contain Connection, Dataset, or Video instances, or RIDs as strings.

update ¤

update(
    *,
    name: str | None = None,
    start: datetime | IntegralNanosecondsUTC | None = None,
    end: datetime | IntegralNanosecondsUTC | None = None,
    description: str | None = None,
    properties: Mapping[str, str] | None = None,
    labels: Sequence[str] | None = None,
    links: Sequence[str] | Sequence[Link] | None = None
) -> Self

Replace run metadata. Updates the current instance, and returns it. Only the metadata passed in will be replaced; the rest will remain untouched.

Links can be URLs or tuples of (URL, name).

Note: This replaces the metadata rather than appending it. To append to labels or properties, merge them before calling this method. E.g.:

new_labels = ["new-label-a", "new-label-b", *run.labels]
run = run.update(labels=new_labels)

Video dataclass ¤

Video(
    rid: str,
    name: str,
    description: str | None,
    properties: Mapping[str, str],
    labels: Sequence[str],
    _clients: _Clients,
)

Bases: HasRid

archive ¤

archive() -> None

Archive this video. Archived videos are not deleted, but are hidden from the UI.

poll_until_ingestion_completed ¤

poll_until_ingestion_completed(
    interval: timedelta = timedelta(seconds=1),
) -> None

Block until video ingestion has completed. After a video is uploaded, this method polls Nominal for its ingest status on the given interval.


Raises:

  • NominalIngestFailed – if the ingest failed

  • NominalIngestError – if the ingest status is not known

unarchive ¤

unarchive() -> None

Unarchives this video, allowing it to show up in the 'All Videos' pane in the UI.

update ¤

update(
    *,
    name: str | None = None,
    description: str | None = None,
    properties: Mapping[str, str] | None = None,
    labels: Sequence[str] | None = None
) -> Self

Replace video metadata. Updates the current instance, and returns it.

Only the metadata passed in will be replaced; the rest will remain untouched.

Note: This replaces the metadata rather than appending it. To append to labels or properties, merge them before calling this method. E.g.:

new_labels = ["new-label-a", "new-label-b", *video.labels]
video = video.update(labels=new_labels)

Workbook dataclass ¤

Workbook(
    rid: str,
    title: str,
    description: str,
    run_rid: str | None,
    _clients: _Clients,
)

Bases: HasRid

nominal_url property ¤

nominal_url: str

Returns a link to the page for this Workbook in the Nominal app

archive ¤

archive() -> None

Archive this workbook. Archived workbooks are not deleted, but are hidden from the UI.

unarchive ¤

unarchive() -> None

Unarchive this workbook, allowing it to be viewed in the UI.

WriteStream dataclass ¤

WriteStream(
    batch_size: int,
    max_wait: timedelta,
    _process_batch: Callable[[Sequence[BatchItem]], None],
    _executor: ThreadPoolExecutor,
    _thread_safe_batch: ThreadSafeBatch,
    _stop: Event,
    _pending_jobs: BoundedSemaphore,
)

close ¤

close(wait: bool = True) -> None

Close the Nominal stream.

Stops the process timeout thread and flushes any remaining batches.

create classmethod ¤

create(
    batch_size: int,
    max_wait: timedelta,
    process_batch: Callable[[Sequence[BatchItem]], None],
) -> Self

Create the stream.

enqueue ¤

enqueue(
    channel_name: str,
    timestamp: str | datetime | IntegralNanosecondsUTC,
    value: float | str,
    tags: dict[str, str] | None = None,
) -> None

Add a message to the queue.

The message will not be sent to Nominal immediately; it is sent once the batch is full or the max_wait timeout elapses.

enqueue_batch ¤

enqueue_batch(
    channel_name: str,
    timestamps: Sequence[str | datetime | IntegralNanosecondsUTC],
    values: Sequence[float | str],
    tags: dict[str, str] | None = None,
) -> None

Add a sequence of messages to the queue.

The messages will not be sent to Nominal immediately; they are sent once the batch is full or the max_wait timeout elapses.
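
A minimal sketch of the batching behavior using a stand-in process_batch that just prints (in practice, streams are created by the SDK with a callback that uploads to Nominal; the import path is assumed):

from datetime import datetime, timedelta, timezone

from nominal.core import WriteStream  # assumed import path

def process_batch(items):
    print(f"uploading {len(items)} points")  # stand-in for the real upload

stream = WriteStream.create(
    batch_size=1000,
    max_wait=timedelta(seconds=5),
    process_batch=process_batch,
)
stream.enqueue("engine_rpm", datetime.now(timezone.utc), 1042.0, tags={"vehicle": "v1"})
stream.close(wait=True)  # flush any remaining batches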

flush ¤

flush(wait: bool = False, timeout: float | None = None) -> None

Flush the current batch of records to Nominal in a background thread.

Parameters:

  • wait – If true, wait for the batch to complete uploading before returning.

  • timeout – If wait is true, the time to wait for flush completion. NOTE: if None, waits indefinitely.

poll_until_ingestion_completed ¤

poll_until_ingestion_completed(
    datasets: Iterable[Dataset],
    interval: timedelta = timedelta(seconds=1),
) -> None

Block until all dataset ingestions have completed (succeeded or failed).

This method polls Nominal for ingest status on each of the datasets on an interval. No specific ordering is guaranteed, but all datasets will be checked at least once.


Raises:

  • NominalIngestMultiError – if any of the datasets failed to ingest
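
For example, a sketch waiting on several datasets at once (the RIDs are placeholders; the import path is assumed):

from datetime import timedelta

from nominal.core import NominalClient, poll_until_ingestion_completed  # assumed import path

client = NominalClient.create("https://api.gov.nominal.io/api", token=None)
datasets = client.get_datasets(["...", "..."])
poll_until_ingestion_completed(datasets, interval=timedelta(seconds=2))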