VEOS:How to Use VE on a Virtual Machine
Date:2023/09
This document applies to VEOS version 3.0.2 and later.
*A virtual machine can only be used with VE30.
*The management software/virtualization stack that can be used with VE is
libvirt/qemu-kvm.
*Basically, one VE is used by one VM guest. When assigning multiple devices,
separate settings are required. See Note 2.
- How to assign VEs to virtual machines
Premise: Assume that the VM Guest is created as g1-rhel86.
1) Host BIOS settings
The Intel VT-x or AMD-V virtualization hardware extensions must be enabled.
In the BIOS menu, enable the following feature, save the settings, and reboot.
Refer to the BIOS menu for how to change the setting.
Intel machine: VT-d (Intel Virtualization Technology)
AMD machine  : IOMMU
2) Add kernel parameters to the host's Grub configuration file
2.1 Add the following to the end of the line for GRUB_CMDLINE_LINUX
in /etc/sysconfig/grub
INTEL Machine: intel_iommu=on iommu=pt
AMD Machine: amd_iommu=on iommu=pt
example:AMD Machine
GRUB_CMDLINE_LINUX="crashkernel=auto resume=/dev/mapper/rhel0-swap \
rd.lvm.lv=rhel0/root rd.lvm.lv=rhel0/swap amd_iommu=on iommu=pt rhgb quiet"
2.2 Regenerate the grub configuration file
(on UEFI-based systems the file is /boot/efi/EFI/*/grub.cfg)
BIOS based       : $grub2-mkconfig -o /boot/grub2/grub.cfg
UEFI based(RHEL) : $grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
UEFI based(Rocky): $grub2-mkconfig -o /boot/efi/EFI/rocky/grub.cfg
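The correct grub.cfg path can also be chosen programmatically; a minimal sketch, assuming a RHEL-family layout (the `grub_cfg_path` helper and the EFI marker directory argument are illustrative, and the EFI vendor directory varies by distribution):

```shell
# Sketch: choose the grub.cfg path to regenerate. The decision is based on
# whether the firmware is UEFI (directory /sys/firmware/efi exists).
grub_cfg_path() {
    if [ -d "$1" ]; then                 # $1: EFI marker dir, normally /sys/firmware/efi
        ls /boot/efi/EFI/*/grub.cfg 2>/dev/null | head -n 1
    else
        echo /boot/grub2/grub.cfg
    fi
}
cfg=$(grub_cfg_path /sys/firmware/efi)
echo "$cfg"
# grub2-mkconfig -o "$cfg"    # run as root to actually regenerate
```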
3) Reboot the host and check if VT-d or AMD IOMMU is enabled.
$ reboot
After booting, run the following to check if the settings are valid:
$if compgen -G "/sys/kernel/iommu_groups/*/devices/*" >/dev/null ;\
then echo IOMMU or VT-D is enabled; else echo IOMMU or VT-D is not enabled; fi
IOMMU or VT-D is enabled
If the message above is not displayed, recheck steps 1) and 2).
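The same condition can be checked by counting the IOMMU groups the kernel created; a sketch (the `iommu_group_count` helper is illustrative, not part of VEOS):

```shell
# Count IOMMU groups under sysfs; a non-zero count means the IOMMU is active.
iommu_group_count() {
    ls "$1" 2>/dev/null | wc -l          # $1: normally /sys/kernel/iommu_groups
}
n=$(iommu_group_count /sys/kernel/iommu_groups)
if [ "$n" -gt 0 ]; then
    echo "IOMMU or VT-D is enabled ($n groups)"
else
    echo "IOMMU or VT-D is not enabled"
fi
```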
4) Additional updates to guest (g1-rhel86) settings.
4.1 If the guest (g1-rhel86) is already running, stop it here
$ virsh shutdown g1-rhel86
4.2 Modify the guest (g1-rhel86) configuration file, adding the following + lines.
$ virsh edit g1-rhel86
Add the following + lines:
+
+
+
+
+
Save and exit.
5) Determine which of the VEs installed on the host to assign to the guest
(g1-rhel86), identify it, create an XML file, and assign the VE to the guest
(g1-rhel86).
5.1 Display a list of the VEs installed on the host.
You can use vecmd to identify each device by its identifying information
(domain (fixed 0000), bus, slot, function).
In the example below there are 8 VEs; from these, determine which VE to
assign to the guest.
$ /opt/nec/ve/bin/vecmd state all
Vector Engine MMM-Command v3.0.1.15
Command:
state -n 0,1,2,3,4,5,6,7 all
-------------------------------------------------------
[ve_state] [os_state] [ownership]
VE 0 [45:00.0] : AVAILABLE ONLINE 361173(veos)
VE 1 [46:00.0] : AVAILABLE ONLINE 361257(veos)
VE 2 [c6:00.0] : AVAILABLE ONLINE 361630(veos)
VE 3 [c5:00.0] : AVAILABLE ONLINE 361578(veos)
VE 4 [07:00.0] : AVAILABLE ONLINE 361132(veos)
VE 5 [06:00.0] : AVAILABLE ONLINE 361134(veos) *Assign this VE to the guest
VE 6 [89:00.0] : AVAILABLE ONLINE 361350(veos)
VE 7 [8a:00.0] : AVAILABLE ONLINE 361472(veos)
-------------------------------------------------------
The device name and identification information for the selected VE 5 [06:00.0]
are as follows:
VE 5 [06:00.0]
| | | |
| | | +Function(0x0)
| | +Slot(0x00)
| +Bus(0x06)
|
| Domain is 0x0000 fixed
|
+ /dev/veslot5
So the ID of the VE device assigned to the guest is /dev/veslot5.
If you don't see the device you selected here, it is already assigned to
another guest and cannot be used. Please select another device.
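The full PCI address string used later in this document can be built from these identifiers with printf; a sketch using the example values for VE 5:

```shell
# Build the full PCI address (domain:bus:slot.function) from the identifiers
# shown above for VE 5. printf interprets the 0x-prefixed values as hex.
domain=0x0000; bus=0x06; slot=0x00; func=0x0
bdf=$(printf "%04x:%02x:%02x.%x" $domain $bus $slot $func)
echo "$bdf"    # 0000:06:00.0
```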
5.2 Using the identifiers (domain, bus, slot, function),
create the XML file as follows:
The XML file you create will also be used later to unassign the VE from the
guest, so save it in a directory used for guest management.
$vi g1-rhel86-veslot5.xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
  </source>
</hostdev>
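As an alternative to editing the file by hand, a libvirt hostdev XML file for PCI passthrough can be generated from the "domain:bus:slot.function" string; a sketch (the file name and address are the example values from this section):

```shell
# Sketch: generate a libvirt hostdev XML fragment for PCI passthrough from a
# "domain:bus:slot.function" string (here the VE 5 address from the example).
bdf="0000:06:00.0"
IFS=':.' read -r dom bus slot func <<EOF
$bdf
EOF
cat > g1-rhel86-veslot5.xml <<EOF
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x$dom' bus='0x$bus' slot='0x$slot' function='0x$func'/>
  </source>
</hostdev>
EOF
cat g1-rhel86-veslot5.xml
```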
5.3 Stop the VEOS services running for veslot5.
$systemctl stop ve-os-launcher@5
$systemctl stop ve-os-state-monitor@5
5.4 Remove the veslot5 device from MMM management.
$/opt/nec/ve/bin/vecmd -N 5 remove
5.5 Assign /dev/veslot5 to the guest (g1-rhel86) using the created
g1-rhel86-veslot5.xml file
$ virsh attach-device g1-rhel86 ./g1-rhel86-veslot5.xml --persistent
6) Start guest (g1-rhel86)
$virsh start g1-rhel86
Executing this command automatically removes the VE from the host and assigns
it to the guest. If the corresponding MMM and VEOS are running when removed
from the host, they will be killed.
7) Log in to the booted guest and make sure you can see the VE.
$ lspci | grep NEC
07:00.0 Co-processor: NEC Corporation Device 0039 (rev 01)
8) Install VE-related software on the guest
Install the VE-related software on the guest (g1-rhel86) in the same way as
on the host; see "How to set up VE related software"
(https://sxauroratsubasa.sakura.ne.jp/documents/guide/pdfs/InstallationGuide_E.pdf).
The settings are now complete. The VE is available to the guest.
9) Once configured, VE allocation works as follows:
Starting a guest automatically removes the VE from the host and assigns it to
the guest. If the device has not been removed from MMM and its VEOS has not
been stopped when the device is removed from the host, MMM and VEOS are
killed.
Stopping a guest unassigns the VE from the guest and automatically reconnects
it to the host. In the case of Note 1, it does not reconnect.
The settings remain valid even after the host is rebooted.
-How to unassign a VE from a guest and assign it to a host
1 Shut down the guest (g1-rhel86).
$virsh shutdown g1-rhel86
2 Unassign the VE from the guest (g1-rhel86) using the g1-rhel86-veslot5.xml file.
$virsh detach-device g1-rhel86 ./g1-rhel86-veslot5.xml --persistent
3 Make sure /dev/veslot5 is created on the host.
$ls -l /dev/veslot5
However, in case of Note 1, it is not created.
- Configuring VE-assigned guests to start automatically
When configuring a guest with a VE assigned to autostart, the VE assigned to
the guest must not be assigned to the host when the host boots.
1. Load the driver at system startup. Create the following file in an editor.
$vi /etc/modules-load.d/vfio-pci.conf
vfio-pci
2. Optional settings when loading the VFIO-PCI driver.
Create the following file in the editor.
$vi /etc/modprobe.d/vfio.conf
install vfio-pci /opt/nec/ve/veos/sbin/vfio-pci-override.sh
3. Create a file to use vfio-pci as the driver for the VE device.
3.1 Create the following /opt/nec/ve/veos/sbin/vfio-pci-override.sh
file in the editor.
Use the g1-rhel86-veslot5.xml file created during assignment.
Add the information displayed by the command below to the VE_FOR_GUEST variable
in the /opt/nec/ve/veos/sbin/vfio-pci-override.sh file.
$printf "%04x:%02x:%02x.%x\n" `grep -o "0x[0-9a-fA-F]*" g1-rhel86-veslot5.xml`
0000:06:00.0
If you didn't save the xml file, use the guest name instead
(the domain name you gave to virsh start) and execute the following command.
$ printf "%04x:%02x:%02x.%x\n" `virsh dumpxml g1-rhel86 | \
grep managed -A 3| grep -v pci |grep -o "0x[0-9a-fA-F]*"`
0000:06:00.0
Add something like VE_FOR_GUEST="0000:06:00.0"
$vi /opt/nec/ve/veos/sbin/vfio-pci-override.sh
#!/bin/bash
# set VE_FOR_GUEST="domain:bus:slot.function"
# if there are multiple, add a space between them
VE_FOR_GUEST="0000:06:00.0"
for dbf in $VE_FOR_GUEST
do
    echo "vfio-pci" > /sys/bus/pci/devices/${dbf}/driver_override
done
modprobe -i vfio-pci
3.3 Set permissions
$chmod 755 /opt/nec/ve/veos/sbin/vfio-pci-override.sh
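The effect of the override loop can be checked without touching the real sysfs tree; a dry-run sketch using a temporary fake directory in place of /sys/bus/pci/devices (the address is the example value):

```shell
# Dry run of the driver_override loop from vfio-pci-override.sh, using a
# temporary fake sysfs tree instead of /sys/bus/pci/devices.
VE_FOR_GUEST="0000:06:00.0"
fake=$(mktemp -d)
for dbf in $VE_FOR_GUEST; do
    mkdir -p "$fake/$dbf"
    echo "vfio-pci" > "$fake/$dbf/driver_override"
done
cat "$fake/0000:06:00.0/driver_override"    # vfio-pci
```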
4. Regenerate initramfs
4.1 Create the following file in an editor.
$vi /etc/dracut.conf.d/vfio-pci.conf
install_optional_items="/opt/nec/ve/veos/sbin/vfio-pci-override.sh /etc/modprobe.d/vfio.conf"
4.2 Regenerate the initramfs.
$dracut -f
5. Automatic start setting
$virsh autostart g1-rhel86
6. Reboot if necessary
$reboot
After rebooting, g1-rhel86 will start automatically.
- How to cancel the automatic startup of a guest to which a VE is assigned
If the host is started with the settings from "Configuring VE-assigned guests
to start automatically", the device will not be automatically reconnected to
the host even when the guest is shut down. If you want to reconnect it to the
host, do the following: remove the assigned device from vfio-pci's device
list, turn off autostart, and run dracut -f.
1. Shut down the guest
$virsh shutdown g1-rhel86
2. Unspecify vfio-pci for the assigned devices
Use the XML file (g1-rhel86-veslot5.xml) used during allocation.
Set the VE_FOR_GUEST variable to the allocated device with the following
command, and make sure it is set.
$VE_FOR_GUEST=`printf "%04x:%02x:%02x.%x\n" \
\`grep -o "0x[0-9a-fA-F]*" g1-rhel86-veslot5.xml\``
$echo $VE_FOR_GUEST
0000:06:00.0
If you didn't save the xml file, use the guest name instead
(the domain name you give to virsh start) and execute the following command.
$if virsh dumpxml g1-rhel86 | grep managed >/dev/null;then \
VE_FOR_GUEST=`printf "%04x:%02x:%02x.%x\n" \`virsh dumpxml g1-rhel86 | \
grep managed -A 3| grep -v pci |grep -o "0x[0-9a-fA-F]*"\`` ;\
else \
echo VEs are not assigned to guests.; \
fi
Make sure VE_FOR_GUEST is set.
$echo $VE_FOR_GUEST
0000:06:00.0
Unspecify vfio-pci
$for dbf in $VE_FOR_GUEST; \
do \
echo >/sys/bus/pci/devices/${dbf}/driver_override; \
done
3. Let the system recognize the device again.
$for dbf in $VE_FOR_GUEST; \
do \
echo $dbf >/sys/bus/pci/devices/${dbf}/driver/unbind; \
echo $dbf >/sys/bus/pci/drivers_probe;\
done
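The clearing loop in step 2 mirrors the setup loop from the autostart configuration; a dry-run sketch against a temporary fake sysfs tree (the paths and address are illustrative):

```shell
# Dry run of clearing driver_override (step 2) before the reprobe in step 3,
# using a temporary fake sysfs tree instead of /sys/bus/pci/devices.
VE_FOR_GUEST="0000:06:00.0"
fake=$(mktemp -d)
mkdir -p "$fake/$VE_FOR_GUEST"
echo "vfio-pci" > "$fake/$VE_FOR_GUEST/driver_override"   # state after setup
for dbf in $VE_FOR_GUEST; do
    echo > "$fake/$dbf/driver_override"                   # clear the override
done
cat "$fake/0000:06:00.0/driver_override"                  # now empty
```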
If you want to start the guest again after the host has recognized the
device, remove the device from MMM and stop VEOS before starting the guest;
MMM and VEOS will be killed if they have not been stopped. Also, once the
device has been recognized by the host, it will automatically reconnect to
the host after the guest is shut down.
4. Remove the device from the VE_FOR_GUEST variable in the
/opt/nec/ve/veos/sbin/vfio-pci-override.sh file
$vi /opt/nec/ve/veos/sbin/vfio-pci-override.sh
VE_FOR_GUEST=""
5. Regenerate initramfs
$dracut -f
6. Canceling automatic start
$virsh autostart --disable g1-rhel86
7. Reboot if necessary
$reboot
- Precautions
1. If you have configured "Configuring VE-assigned guests to start
automatically", stopping the guest does not automatically reconnect the VE to
the host. To reconnect it, execute "How to cancel the automatic startup of a
guest to which a VE is assigned".
2. When assigning multiple VEs to a guest
It is possible to assign multiple VEs to one VM guest and other VEs to other
guests, but those VEs are then not available on the host.
ATS (Address Translation Service) is not supported, resulting in poor
data-transfer performance between VE cards and VE nodes in virtual machines.
In addition, when an SX-Aurora TSUBASA with an AMD CPU allocates multiple VEs
to a virtual machine, the PCIe ACS (Access Control Service) settings must be
enabled for data transfer; refer to the separate installation manual
"PCIe: Access Control Service (ACS)".
3. You must allocate the same number of x86 cores to the virtual machine as
the number of VEs you allocate to it.
4. The virtual machine should be allocated at least 5 GB of memory.
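Following Notes 3 and 4, a guest with one VE assigned needs at least one x86 core and 5 GB of memory; in the guest's libvirt domain XML this corresponds to elements like the following (the values are illustrative for a one-VE guest):

```xml
<vcpu placement='static'>1</vcpu>
<memory unit='GiB'>5</memory>
```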