The NVIDIA Control Panel offers the option to enable multiple monitors with the vGPU driver, but only one "monitor" shows up as available. How do I get a second "monitor"/virtual display head out of a GRID vGPU?
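For reference, the number of virtual display heads a vGPU can drive is fixed by the assigned vGPU profile, not by anything the guest driver exposes on its own, so the first thing to check is whether the current profile supports more than one head. A minimal sketch of where the profile is set on vSphere (the profile name below is only an example; other hypervisors configure this elsewhere):

```ini
# .vmx entry selecting the vGPU profile for this VM
# (profile name is an example -- pick one whose display-head limit fits your need)
pciPassthru0.vgpu = "grid_k180q"
```

The GRID vGPU user guide lists the maximum number of display heads and the maximum resolution each profile supports.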
2018-09-10 17:14
We have an issue when using 2 screens in full-screen mode: randomly, when you log in, the session freezes and you get vertical and horizontal scroll bars on your endpoint device. The XenDesktop session never launches; the only solution is to kill it from the Delivery Controller and have the user log in again. This only affects multi-monitor users; single-screen users work just fine. I have tried setting the resolution on the monitors to the lowest as a test and still have the same issue.

Environment information below:

- Dell R730 server
- 2 K1 cards installed in the server
- Hypervisor: vSphere 6.0, all updates as of 1/6/2016
- Citrix XenDesktop 7.6 with the most current VDA
- Citrix PVS 7.6
- NVIDIA driver version: 354.56
- Virtual machines: 2008 R2, 2 vCPU, 4 GB memory, profile 120Q (have tried all the other profiles as well, same issue)

If I uninstall the NVIDIA driver from the VM, the issue goes away even if I leave the card attached to the virtual machine. Reinstalling the driver, the issue comes back. Any thoughts?
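Not a fix confirmed in this thread, but one thing worth verifying in a setup like this: on vSphere, vGPU-enabled VMs need their full guest memory reserved, and an incomplete reservation is a common source of session instability. A sketch of the relevant `.vmx` entry, assuming the 4 GB VM described above (the value is an example sized to that VM):

```ini
# Reserve all guest memory for the vGPU VM (4096 MB matches the 4 GB VM above)
sched.mem.min = "4096"
```

The same reservation can be set in the vSphere client via "Reserve all guest memory (All locked)" on the VM's memory settings.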
2018-09-25 15:00
I have a Dell R720 with a K1 board in it, and I am testing out vDGA in VMware View 6.

My K1 will only give me one of two options: either assign all GPUs to PCIe passthrough, or none. Not sure if that is the way it is or not. However, my problem is that when I assign the PCIe passthrough video cards to VMs, the first one will boot fine, and all subsequent VMs will refuse to start, displaying the error: "Device 8:0.0 is already in use."

- VM 1 is assigned 7:0.0
- VM 2 is assigned 8:0.0

I have tried moving VM 2 to 9:0.0 and A:0.0 with the same results; only one VM can operate at any given time. Has anyone else had this problem and is able to shed some light on it?
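Offered as a sketch rather than a confirmed fix, two host-side items are commonly involved in this "already in use" symptom: passthrough toggles only take effect after the ESXi host is rebooted, and vDGA VMs need full memory reservation plus, for VMs with more than roughly 2 GB of RAM, a `pciHole` entry in the `.vmx`. Example entries (values sized for a hypothetical 4 GB VM; the passthrough device id itself is set per VM and is omitted here):

```ini
pciPassthru0.present = "TRUE"
pciHole.start = "2048"    # required for vDGA VMs with more than ~2 GB RAM
sched.mem.min = "4096"    # reserve all guest memory (vDGA requirement)
```

If the second VM still reports the device as in use after these settings and a host reboot, it is worth confirming that no other VM's configuration still references the same PCI address.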
2018-09-26 15:29