It's interesting that the above "HPC reference architecture" shows a GPU-to-GPU InfiniBand fabric, despite Nvidia also nominally pushing NVLink Switch (https://www.nvidia.com/en-us/data-center/nvlink/) for the HPC use case.
There is "CUDA-aware MPI" which would let you RDMA from device to device. But the more modern way would be MPI for the host communication and their own library NCCL for the device communication. NCCL has similar collective functions a MPI but runs on the device which makes it much more efficient to integrate in the flow of your kernels. But you would still generally bootstrap your processes and data through MPI.
I use OpenMPI with no issues across multiple H100 and A100 nodes, over multiple InfiniBand 200G and Ethernet 100G/200G networks, with RDMA (using Mellanox rather than Broadcom cards, but as far as I know Broadcom supports this just the same). Side note: make sure you build nvidia_peermem correctly if you want GPUDirect RDMA to work :)
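For reference, this is roughly what the CUDA-aware device-to-device path looks like from the application side: device pointers go straight into the MPI calls, and with a CUDA/UCX-enabled Open MPI build plus nvidia_peermem loaded, the transfer can bypass host staging. A hedged sketch, assuming two ranks and one GPU each:

```c
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int count = 1 << 20;            // ~4 MB of floats, example size
    float *dev_buf;
    cudaSetDevice(0);                      // one GPU per rank assumed
    cudaMalloc(&dev_buf, count * sizeof(float));

    if (rank == 0) {
        cudaMemset(dev_buf, 0, count * sizeof(float));
        // Device pointer passed directly to MPI -- no cudaMemcpy to host.
        MPI_Send(dev_buf, count, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(dev_buf, count, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d floats into device memory\n", count);
    }

    cudaFree(dev_buf);
    MPI_Finalize();
    return 0;
}
```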