Conversation

@Bye-legumes (Contributor) commented Apr 8, 2025

Why are these changes needed?

Close #45755.
This PR addresses the need for enhanced GPU usage metrics at the task/actor level in the Ray dashboard. Currently, the Ray dashboard provides detailed CPU and memory usage metrics for individual tasks and actors, but lacks similar granularity for GPU metrics. This enhancement aims to fill that gap by introducing per-task/actor GPU utilization and memory usage metrics.


| Area | Change | +/– |
| --- | --- | --- |
| `dashboard/agent.py`, `dashboard/modules/stats_collector.py` | Collect per-GPU SM, memory-used, memory-total and temperature using NVML (fallback to `nvidia-smi --query-gpu` if NVML is unavailable). | +307 LOC |
| `dashboard/frontend/src/pages/node/Stats.vue` | New GPU bars beside the existing CPU/Mem charts; shows live %, absolute MiB and thermals with colour-coded alert gradients. | +62 LOC |
| `dashboard/frontend/src/components/ResourceIcon.tsx` | Adds `gpu-core` and `gpu-mem` icons and tooltip helpers. | +18 LOC |
| `python/ray/dashboard/tests/test_gpu_stats.py` | E2E integration test that spins up a fake GPU via `CUDA_VISIBLE_DEVICES=0` plus mock NVML bindings to assert the Dashboard JSON schema and time-series values. | +20 LOC |
| Misc. | Typo fixes, `pylint: disable=c-extension-no-member` guards, build-time NVML check in `setup.py`. | –12 LOC |
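The NVML collection path in the table above can be outlined roughly as follows. This is an illustrative sketch only, not the PR's actual implementation: `collect_gpu_stats` is a hypothetical name, and the NVML-like module is passed in as an argument so the logic stays testable without GPU hardware (the real code would import `pynvml` directly).

```python
from typing import Any, Dict, List


def collect_gpu_stats(nvml: Any) -> List[Dict[str, Any]]:
    """Collect per-GPU utilization and memory via an NVML-like module.

    `nvml` is expected to expose the standard pynvml entry points
    (nvmlInit, nvmlDeviceGetCount, ...). Injecting it keeps this
    sketch runnable against a stub in tests.
    """
    nvml.nvmlInit()
    try:
        stats = []
        for index in range(nvml.nvmlDeviceGetCount()):
            handle = nvml.nvmlDeviceGetHandleByIndex(index)
            util = nvml.nvmlDeviceGetUtilizationRates(handle)
            mem = nvml.nvmlDeviceGetMemoryInfo(handle)
            stats.append(
                {
                    "index": index,
                    "name": nvml.nvmlDeviceGetName(handle),
                    "utilization_gpu": util.gpu,  # percent
                    "memory_used": mem.used,      # bytes
                    "memory_total": mem.total,    # bytes
                }
            )
        return stats
    finally:
        # Always release the NVML handle, even on partial failure.
        nvml.nvmlShutdown()
```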

[screenshots]

Related issue number

Close #45755.

Checks

  • I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
    • I've added any new APIs to the API Reference. For example, if I added a
      method in Tune, I've added it in doc/source/tune/api/ under the
      corresponding .rst file.
  • I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

Signed-off-by: zhilong <[email protected]>
@Bye-legumes (Contributor, Author) commented Apr 8, 2025

```python
import ray
import torch

# Initialize Ray, using all available GPUs
ray.init()

# Check if CUDA is available
print(f"CUDA available: {torch.cuda.is_available()}")
print(f"CUDA device count: {torch.cuda.device_count()}")


@ray.remote(num_gpus=1)
class TorchGPUWorker:
    def __init__(self):
        assert torch.cuda.is_available(), "CUDA is not available"
        self.device = torch.device("cuda")
        print(f"Worker running on device: {self.device}")

    def matrix_multiply(self, size=20000):
        # Create two large random tensors on the GPU
        a = torch.randn(size, size, device=self.device)
        b = torch.randn(size, size, device=self.device)
        result = torch.matmul(a, b)

        # Return just the norm to reduce transfer cost
        return result.norm().item()


if __name__ == "__main__":
    # Create an actor
    gpu_worker = TorchGPUWorker.remote()

    # Run a GPU task
    result = ray.get(gpu_worker.matrix_multiply.remote(2048))

    print(f"Result norm of matrix multiply on GPU: {result}")
```

@hainesmichaelc hainesmichaelc added the community-contribution Contributed by the community label Apr 9, 2025
zhaoch23 added 2 commits April 9, 2025 16:22
Signed-off-by: zhaoch23 <[email protected]>
Signed-off-by: zhaoch23 <[email protected]>
@jcotant1 jcotant1 added dashboard Issues specific to the Ray Dashboard observability Issues related to the Ray Dashboard, Logging, Metrics, Tracing, and/or Profiling labels Apr 10, 2025
zhaoch23 and others added 4 commits April 10, 2025 16:46
Signed-off-by: zhaoch23 <[email protected]>
Signed-off-by: zhaoch23 <[email protected]>
Signed-off-by: zhaoch23 <[email protected]>
@zhaoch23 (Contributor) commented Apr 10, 2025

[screenshots]

@zhaoch23 (Contributor) commented:

Script to test:

```python
import os
import time

import ray
import torch

# Initialize Ray, using all available GPUs
ray.init()

# Check if CUDA is available
print(f"CUDA available: {torch.cuda.is_available()}")
print(f"CUDA device count: {torch.cuda.device_count()}")


@ray.remote(num_gpus=0.5)
class TorchGPUWorker:
    def __init__(self):
        assert torch.cuda.is_available(), "CUDA is not available"
        print(os.getenv("CUDA_VISIBLE_DEVICES"))
        self.device = torch.device("cuda")
        print(f"Worker running on device: {self.device}")

    def matrix_multiply(self, size=16384):
        a = torch.randn(size, size, device=self.device)
        b = torch.randn(size, size, device=self.device)

        # Warm up once so the timed loop measures steady-state kernels
        torch.matmul(a, b)
        torch.cuda.synchronize()

        REPEATS = 6

        start = time.time()
        for _ in range(REPEATS):
            torch.matmul(a, b)
        torch.cuda.synchronize()
        return (time.time() - start) / REPEATS


if __name__ == "__main__":
    # Create four actors sharing the GPUs (num_gpus=0.5 each)
    gpu_workers = [TorchGPUWorker.remote() for _ in range(4)]

    # Keep the GPUs busy so the dashboard has metrics to display
    for _ in range(100):
        results = ray.get([w.matrix_multiply.remote() for w in gpu_workers])
        print(f"Mean matmul time per worker (s): {results}")
```

Signed-off-by: zhaoch23 <[email protected]>
@Bye-legumes Bye-legumes changed the title [WIP][Dashboard] Add GPU component usage [Dashboard] Add GPU component usage Apr 11, 2025
Comment on lines 466 to 479
                        expr="sum(ray_component_gpu_utilization{{{global_filters}}} / 100) by (Component, pid, GpuIndex, GpuDeviceName)",
                        legend="{{Component}}::{{pid}}, gpu.{{GpuIndex}}, {{GpuDeviceName}}",
                    ),
                ],
            ),
            Panel(
                id=46,
                title="Component GPU Memory Usage",
                description="GPU memory usage of Ray components.",
                unit="bytes",
                targets=[
                    Target(
                        expr="sum(ray_component_gpu_memory_usage{{{global_filters}}}) by (Component, pid, GpuIndex, GpuDeviceName)",
                        legend="{{Component}}::{{pid}}, gpu.{{GpuIndex}}, {{GpuDeviceName}}",
Comment (Contributor):
let's remove pid to align with the Node CPU component graph.

if pid == "-":  # no process on this GPU
    continue
gpu_id = int(gpu_id)
pinfo = ProcessGPUInfo(
Comment (Contributor):
can we use a different type here? Since gpu_memory_usage is of type Megabytes. It's very confusing for it to be a percentage and may introduce tricky bugs later.

if nv_process.usedGpuMemory
else 0
),
gpu_utilization=None, # Not available in pynvml
Comment (Contributor):
What if we match the Ray Dashboard behavior, where we show the total GPU utilization for the GPUs that the process attaches to, not necessarily the utilization exclusive to that process?

The nvidia-smi parsing is fragile. I don't know what backwards-compatibility guarantees nvidia-smi provides.

Comment (Contributor):

Do you mean we give up nvidia-smi? Or do we implement a fallback strategy that uses pynvml to display the total GPU utilization if nvidia-smi is unavailable?

Comment (Contributor):

Yes, let's remove the usage of nvidia-smi and just use pynvml all the time. We can add nvidia-smi at a later time if there is enough demand. But I think for most use cases the pynvml approach should be good enough.

Comment (Contributor):

Discussed offline. We will be adding the nvidia-smi dependency. We will add a test validating the output of nvidia-smi pmon. We will also update the ray dashboard UI to utilize the gpu utilization value from nvidia-smi instead of pynvml.

@alanwguo (Contributor) commented:

I did some manual testing and didn't seem to get any metrics for component_gpu_utilization even though I got metrics for component_memory_usage

[screenshot]

In my worker node, this is the output i get from nvidia-smi:

$ nvidia-smi --query-gpu=index,name,uuid,utilization.gpu,memory.used,memory.total --format=csv,noheader,nounits
0, Tesla T4, GPU-9b0c908a-c921-4693-783f-534cb205ec77, 100, 9059, 15360
1, Tesla T4, GPU-15797035-31fa-a236-16b1-209b8dd896dd, 100, 9059, 15360
2, Tesla T4, GPU-f20b2bc8-2492-322f-551b-23e8ef87130f, 100, 9059, 15360
3, Tesla T4, GPU-2ac2bffc-c188-fa30-3710-21d904c37691, 100, 9059, 15360

$ nvidia-smi pmon -c 1
# gpu         pid   type     sm    mem    enc    dec    jpg    ofa    command 
# Idx           #    C/G      %      %      %      %      %      %    name 
    0       5569     C     94     84      -      -      -      -    ray::RayTrainWo
    1       5718     C     91     80      -      -      -      -    ray::RayTrainWo
    2       5716     C     91     81      -      -      -      -    ray::RayTrainWo
    3       5717     C     90     81      -      -      -      -    ray::RayTrainWo

$ curl localhost:8085/metrics | grep component_gpu
# HELP ray_component_gpu_memory_usage GPU memory usage of all components on the node.
# TYPE ray_component_gpu_memory_usage gauge
ray_component_gpu_memory_usage{Component="ray::RayTrainWorker",GpuDeviceName="Tesla T4",GpuIndex="2",IsHeadNode="",SessionName="session_2025-05-13_16-42-55_367950_2270",Version="3.0.0.dev0",ip="",pid="5716"} 9.495904256e+09
ray_component_gpu_memory_usage{Component="ray::RayTrainWorker",GpuDeviceName="Tesla T4",GpuIndex="3",IsHeadNode="",SessionName="session_2025-05-13_16-42-55_367950_2270",Version="3.0.0.dev0",ip="",pid="5717"} 9.495904256e+09
ray_component_gpu_memory_usage{Component="ray::RayTrainWorker",GpuDeviceName="Tesla T4",GpuIndex="1",IsHeadNode="",SessionName="session_2025-05-13_16-42-55_367950_2270",Version="3.0.0.dev0",ip="",pid="5718"} 9.495904256e+09
ray_component_gpu_memory_usage{Component="ray::RayTrainWorker",GpuDeviceName="Tesla T4",GpuIndex="0",IsHeadNode="",SessionName="session_2025-05-13_16-42-55_367950_2270",Version="3.0.0.dev0",ip="",pid="5569"} 9.495904256e+09

@zhaoch23 (Contributor) commented:

I have fixed some potential parsing errors. This is what it looks like on my side:
[screenshot]
Please let me know if this fix works in your environment. @alanwguo

zhaoch23 added 4 commits July 25, 2025 17:06
Signed-off-by: zhaoch23 <[email protected]>
Signed-off-by: zhaoch23 <[email protected]>
Signed-off-by: zhaoch23 <[email protected]>
# Whether GPU metrics collection via `nvidia-smi` is enabled.
# Controlled by the environment variable `RAY_metric_enable_gpu_nvsmi`.
# Defaults to False to use pynvml to collect usage.
RAY_METRIC_ENABLE_GPU_NVSMI = env_bool("RAY_metric_enable_gpu_nvsmi", False)
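For reference, a flag like this is typically read with a small helper along these lines. This `env_bool` is a stand-in sketch under stated assumptions; Ray's actual `env_bool` may accept different spellings.

```python
import os


def env_bool(key: str, default: bool) -> bool:
    """Read a boolean feature flag from the environment.

    Returns `default` when the variable is unset; otherwise treats the
    usual truthy spellings ("1", "true", "yes", "on") as True and
    everything else as False.
    """
    value = os.environ.get(key)
    if value is None:
        return default
    return value.strip().lower() in ("1", "true", "yes", "on")


# The flag gating nvidia-smi-based collection; defaults to pynvml.
RAY_METRIC_ENABLE_GPU_NVSMI = env_bool("RAY_metric_enable_gpu_nvsmi", False)
```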
Comment (Collaborator):
Should we turn it on by default? Otherwise the code path will never be tested and no one will use this feature.

Comment (Collaborator):

Discussed offline: we will gradually roll it out, and we also have a follow-up to use the C NVML library directly instead of the nvidia-smi CLI.

Comment on lines +264 to +266
0 if columns[pid_index] == "-" else int(columns[pid_index]),
0 if columns[sm_index] == "-" else int(columns[sm_index]),
0 if columns[mem_index] == "-" else int(columns[mem_index]),
Comment (Collaborator):

Is it guaranteed to be integer, never float?

Comment (Contributor):

They are int types according to NVML.

Comment on lines +250 to +253
gpu_id_index = table_header.index("gpu")
pid_index = table_header.index("pid")
sm_index = table_header.index("sm")
mem_index = table_header.index("mem")
Comment (Collaborator):

what if these headers don't exist?

Comment (Contributor):

pmon is expected to emit all of these fields. A ValueError is now raised and handled if the header or any of these fields is missing.

Comment on lines +922 to +929
# Build process ID -> GPU info mapping for faster lookups
gpu_pid_mapping = defaultdict(list)
if gpus is not None:
    for gpu in gpus:
        processes = gpu.get("processes_pids")
        if processes:
            for proc in processes.values():
                gpu_pid_mapping[proc.pid].append(proc)
Comment (Collaborator):

This is no longer needed since gpu_ids is already a dict

Comment (Contributor):

One pid can appear on multiple GPUs. Here we build a flat map keyed by pid, gathered across all GPUs.
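That reply can be illustrated with a small standalone sketch (names hypothetical; dict access is used for the process entries, which matches the later bug-fix noting the entries are TypedDicts at runtime):

```python
from collections import defaultdict
from typing import Any, Dict, List


def build_pid_mapping(gpus: List[Dict[str, Any]]) -> Dict[int, List[Dict[str, Any]]]:
    """Flatten per-GPU process tables into a pid -> [gpu entries] map.

    A single process can hold memory on several GPUs, so each pid may
    map to multiple entries, one per GPU it appears on.
    """
    gpu_pid_mapping: Dict[int, List[Dict[str, Any]]] = defaultdict(list)
    for gpu in gpus or []:
        processes = gpu.get("processes_pids") or {}
        for proc in processes.values():
            gpu_pid_mapping[proc["pid"]].append(proc)
    return gpu_pid_mapping
```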

Comment on lines +1241 to +1258
        """
        tags = {"ip": self._ip, "Component": component_name}

        records = []
        records.append(
            Record(
                gauge=METRICS_GAUGES["component_gpu_memory_mb"],
                value=0.0,
                tags=tags,
            )
        )
        records.append(
            Record(
                gauge=METRICS_GAUGES["component_gpu_percentage"],
                value=0.0,
                tags=tags,
            )
        )
Comment (Collaborator):

This can just be moved to _generate_reseted_stats_record

Comment (Contributor):

Unlike CPU processes, a GPU process is deemed stale as soon as it stops using GPU resources, even if it remains alive. To address this, we implemented a separate method for generating reset records for GPU statistics.
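The behavior described can be sketched as follows; `generate_gpu_reset_records` and the plain-dict `Record` stand-ins are illustrative, not the PR's actual API. Components seen previously but no longer reporting GPU usage get zeroed gauges so their time series drop to 0 instead of holding the last value.

```python
from typing import Dict, List, Set


def generate_gpu_reset_records(
    previously_seen: Set[str], currently_active: Set[str], ip: str
) -> List[Dict]:
    """Emit zeroed GPU gauges for components that stopped using the GPU.

    A component is stale for GPU metrics as soon as it reports no GPU
    usage, even if the underlying process is still alive.
    """
    records = []
    for component_name in sorted(previously_seen - currently_active):
        tags = {"ip": ip, "Component": component_name}
        records.append(
            {"gauge": "component_gpu_memory_mb", "value": 0.0, "tags": tags}
        )
        records.append(
            {"gauge": "component_gpu_percentage", "value": 0.0, "tags": tags}
        )
    return records
```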

Comment on lines 1283 to 1289

# Track if this process has GPU usage
if (
    stat.get("gpu_memory_usage", 0) > 0
    or stat.get("gpu_utilization", 0) > 0
):
    gpu_proc.add(proc_name)
Comment (Collaborator):

Do we need all this code?

Comment (Contributor):

Similarly to the above, this code tracks only processes with positive GPU utilization, to identify stale GPU processes and generate the corresponding reset records for them.

@jjyao (Collaborator) left a review:

LG. Just some naming nits.

@jjyao (Collaborator) commented Jul 30, 2025

@zhaoch23 @Bye-legumes there are some lint failures.

Signed-off-by: zhaoch23 <[email protected]>
@Bye-legumes (Contributor, Author) commented:

Follow-up

Use the C NVML library via pybind (in place of pynvml) to get the GPU usage directly.

@jjyao jjyao merged commit 2f7a215 into ray-project:master Jul 31, 2025
5 checks passed
avibasnet31 pushed a commit to avibasnet31/ray that referenced this pull request Aug 2, 2025
avibasnet31 pushed a commit to avibasnet31/ray that referenced this pull request Aug 2, 2025
elliot-barn pushed a commit that referenced this pull request Aug 4, 2025
kamil-kaczmarek pushed a commit that referenced this pull request Aug 4, 2025
mjacar pushed a commit to mjacar/ray that referenced this pull request Aug 5, 2025
This was referenced Aug 27, 2025
alanwguo added a commit that referenced this pull request Aug 28, 2025
## Why are these changes needed?
Bugs introduced in #52102.

Two bugs:
- `proc` is a TypedDict, so it needs to be fetched via `proc["pid"]` instead of `proc.pid`.
- Changing `processes_pid` is a backwards-incompatible change that ends up changing the dashboard APIs that power the Ray dashboard. Maintain backwards compatibility.

Verified fix:
Metrics work again:
<img width="947" height="441" alt="Screenshot 2025-08-27 at 12 22 40 PM" src="https://github.com/user-attachments/assets/0a9a83e7-b720-4ad0-b90e-1baa394edde5" />

Ray Dashboard works again:
<img width="1824" height="1029" alt="Screenshot 2025-08-27 at 12 21 51 PM" src="https://github.com/user-attachments/assets/6b0e08e4-69c9-4223-b736-ff69b8d306db" />

Signed-off-by: Alan Guo <[email protected]>
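The first bug fixed in that commit is easy to reproduce in isolation: a TypedDict instance is an ordinary dict at runtime, so attribute access fails. A minimal sketch (the `ProcessGPUInfo` fields shown are illustrative):

```python
from typing import TypedDict


class ProcessGPUInfo(TypedDict):
    pid: int
    gpu_memory_usage: int


proc: ProcessGPUInfo = {"pid": 5569, "gpu_memory_usage": 9059}

# Correct: a TypedDict is a plain dict at runtime.
pid = proc["pid"]

# Incorrect: attribute access raises AttributeError.
try:
    proc.pid  # type: ignore[attr-defined]
except AttributeError:
    print("proc.pid raises AttributeError; use proc['pid']")
```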
aslonnie pushed a commit that referenced this pull request Aug 28, 2025
cherrypick #56009

Bugs introduced in #52102; same fix description as the commit above.

Signed-off-by: Alan Guo <[email protected]>
tohtana pushed a commit to tohtana/ray that referenced this pull request Aug 29, 2025
tohtana pushed a commit to tohtana/ray that referenced this pull request Aug 29, 2025
Labels: community-contribution · dashboard · go · observability · P1
Successfully merging this pull request may close: [Core] Show per task/actor GPU usage metric
7 participants