Nvidia Optimus
'Optimus technology' is a software (and possibly hardware) solution for automatically switching between an integrated graphics chip or IGP (such as an onboard Intel chip) and a more powerful NVIDIA graphics chip. This technology is aimed specifically at laptops. The precursor to this technology was 'switchable graphics', in which the user had to manually switch between the graphics cards. It may require that the NVIDIA GPU has the PCOPY engine.
The graphics system in a laptop has a GPU with some memory. In the case of an IGP, this memory may be a slice of system memory; otherwise it is usually dedicated memory living on the GPU. The GPU connects to the laptop display or to an output port. There are two main problems to solve in order to support Optimus under Linux:
1) Currently we have no way to know in advance which outputs (displays) are connected to which GPU.
2) The Optimus software should perform the task of switching which of the two graphics processors drives your display. Ideally this would be done by directly flipping a hardware switch, called a mux (multiplexer). However, such a mux does not always exist!
If a hardware mux does not exist, there is no physical way to perform this GPU switching. Optimus therefore effectively implements a software mux: it ensures that the relevant data is sent to and processed on the right GPU, and the data needed for display is then copied to the device that actually drives the screen.
When it comes to how a specific machine is configured, there are a number of possibilities. If a hardware mux exists, it can select which GPU drives the internal panel, the external monitor, or possibly both. It is also possible that a GPU is hardwired to the internal panel, so the other GPU cannot drive it at all; the same goes for the external monitor output. In the worst case, the Intel GPU is hardwired to the internal panel and the NVIDIA GPU is hardwired to the external output! The best case scenario is a mux that can select which GPU drives which output.
Basically, you can have any combination of these possibilities; there is no standard for how things are wired. There should be ways to detect the wiring and whether (and where) a mux exists, but the documentation is not available to the developers. Maybe you can help us figure out how to do this; have any ideas? You can also petition NVIDIA to release these specs via NVIDIA customer help.
Switcheroo - Using one card at a time
If your laptop has a hardware mux, the kernel switcheroo driver may be able to select the desired GPU at boot. There are also hacks based on the switcheroo, like asus-switcheroo, but they offer no extra value; if one of the hacks happens to work while the switcheroo does not, the switcheroo has a bug. There may already be patches pending for the mainline kernel.
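A quick way to check whether the switcheroo found a mux on your machine is to look for its debugfs interface (run as root; this assumes debugfs is mounted at the usual /sys/kernel/debug location):

# ls /sys/kernel/debug/vgaswitcheroo/switch

If the file does not exist, the driver did not detect a usable mux.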
In all other cases, you are stuck with whatever happens to work by default: no switching, no framebuffer copying. Yet.
Using Optimus/Prime
'PRIME GPU offloading' and 'Reverse PRIME' are an attempt to support muxless hybrid graphics in the Linux kernel. They require:
DRI2
Setup
- An up-to-date graphics stack (kernel, X server and Mesa).
- KMS drivers for both GPUs loaded.
- DDX drivers for both GPUs loaded.
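As a quick sanity check (a sketch, assuming an Intel + NVIDIA laptop using the i915 and nouveau kernel modules), you can verify that both KMS drivers are loaded:

$ lsmod | grep -E 'i915|nouveau'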
If everything went well, xrandr --listproviders should list two providers. In my case, this gives:
$ xrandr --listproviders
Providers: number : 2
Provider 0: id: 0x8a cap: 0xb, Source Output, Sink Output, Sink Offload crtcs: 2 outputs: 2 associated providers: 1 name:Intel
Provider 1: id: 0x66 cap: 0x7, Source Output, Sink Output, Source Offload crtcs: 2 outputs: 5 associated providers: 1 name:nouveau
Offloading 3D
It is then important to tell PRIME which card should be used for offloading. In my case, I would like Nouveau to act as the offload source, with the Intel card as the sink:
$ xrandr --setprovideroffloadsink nouveau Intel
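This setting does not persist across X restarts. One possible way to apply it automatically (assuming your display manager sources ~/.xprofile, which not all of them do) is:

$ echo 'xrandr --setprovideroffloadsink nouveau Intel' >> ~/.xprofile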
Once this is done, it becomes very easy to select which card should be used. If you want to offload an application to the secondary GPU, set DRI_PRIME=1; when the application is launched, it will use the second card for its rendering. If you want to use the "regular" GPU, set DRI_PRIME to 0 or omit it. The behaviour can be seen in the following example:
$ DRI_PRIME=0 glxinfo | grep "OpenGL vendor string"
OpenGL vendor string: Intel Open Source Technology Center
$ DRI_PRIME=1 glxinfo | grep "OpenGL vendor string"
OpenGL vendor string: nouveau
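For example, to run a single application on the discrete card while the rest of the desktop stays on the integrated GPU (glxgears here just stands in for any OpenGL application):

$ DRI_PRIME=1 glxgears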
Using outputs on discrete GPU
If the second GPU has outputs that aren't accessible by the primary GPU, you can use "Reverse PRIME" to make use of them. The primary GPU renders the images and then passes them to the secondary GPU for display. In the scenario above, you would run
$ xrandr --setprovideroutputsource nouveau Intel
When this is done, the NVIDIA card's outputs should be available in xrandr, and you could run something like
$ xrandr --output HDMI-1 --auto --above LVDS1
in order to add a second screen driven by the NVIDIA card.
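When you are done with the external screen, turn the output off again so the discrete card can power down later (this reuses the HDMI-1 name from the example above; yours may differ):

$ xrandr --output HDMI-1 --off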
DRI3
Setup
The implementation of DRI3 aims to provide a more convenient way to use a PRIME setup. It requires some additional setup steps:
- A kernel version 3.17 or newer with render nodes (3.16 only works when booting with drm.rnodes=1).
- XServer 1.16 with DRI3 support.
- Mesa 10.3 with DRI3 support.
- KMS drivers for both GPUs loaded.
- DDX drivers for the primary GPU loaded.
Attention: render nodes require the user to be in the "video" group.
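You can verify both requirements from a shell. The device path below is the standard render node location; the group name may vary between distributions:

$ ls -l /dev/dri/renderD*
$ id -nG | grep -w video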
If everything went well, offloading to the secondary GPU is done with DRI_PRIME=1:
$ DRI_PRIME=0 glxinfo | grep "OpenGL vendor string"
OpenGL vendor string: Intel Open Source Technology Center
$ DRI_PRIME=1 glxinfo | grep "OpenGL vendor string"
OpenGL vendor string: nouveau
Power management
When an application is using 'PRIME GPU offloading', both the discrete and the integrated GPU are active, and aside from optimizations at the driver level, nothing else can be done. However, when no application is using the discrete GPU, the default behaviour is for the card to automatically power down entirely after 5 seconds. Note that using an output on the discrete GPU will force it to stay on.
This dynamic power management feature was added in Linux 3.12 but requires Linux 3.13 to work properly with Nouveau. If you cannot make use of this feature and do not mind forgoing your NVIDIA GPU, it is recommended to blacklist the 'nouveau' module and use bbswitch to turn off the NVIDIA GPU. Consult your distribution's wiki for more information.
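As a sketch of the bbswitch approach (the configuration file name is just a common convention; check your distribution's documentation for the exact steps), you could blacklist Nouveau and power the card down through bbswitch's /proc interface:

# echo "blacklist nouveau" > /etc/modprobe.d/blacklist-nouveau.conf
# modprobe bbswitch
# echo OFF > /proc/acpi/bbswitch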
Checking the current power state
You can query the current power state and policy by running as root:
# cat /sys/kernel/debug/vgaswitcheroo/switch
0:DIS: :DynOff:0000:01:00.0
1:IGD:+:Pwr:0000:00:02.0
2:DIS-Audio: :Off:0000:01:00.1
Each line of the output is of the following format:
- A number: not important
- A string:
- DIS: Discrete GPU (your AMD or NVIDIA GPU)
- IGD: Integrated Graphics (usually your Intel GPU)
- DIS-Audio: The audio device exported by your discrete GPU for HDMI sound playback
- A sign:
- '+': This device is connected to graphics connectors
- ' ': This device is not connected to graphics connectors
- A power state:
- OFF: The device is powered off
- ON: The device is powered on
- DynOff: The device is currently powered off but will be powered on when needed
- DynPwr: The device is currently powered on but will be powered off when not needed
- The PCI-ID of the device
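To watch the power state change as you start and stop applications, you can simply re-read the file, for example (as root):

# watch -n 1 cat /sys/kernel/debug/vgaswitcheroo/switch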
Forcing the power state of the devices
Turn on the GPU that is not currently driving the outputs:
echo ON > /sys/kernel/debug/vgaswitcheroo/switch
Turn off the GPU that is not currently driving the outputs:
echo OFF > /sys/kernel/debug/vgaswitcheroo/switch
Connect the graphics connectors to the integrated GPU:
echo IGD > /sys/kernel/debug/vgaswitcheroo/switch
Connect the graphics connectors to the discrete GPU:
echo DIS > /sys/kernel/debug/vgaswitcheroo/switch
Prepare a switch to the integrated GPU to occur when the X server gets restarted:
echo DIGD > /sys/kernel/debug/vgaswitcheroo/switch
Prepare a switch to the discrete GPU to occur when the X server gets restarted:
echo DDIS > /sys/kernel/debug/vgaswitcheroo/switch
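All of these writes require root privileges. Note that a plain "sudo echo" will not work, because the redirection is performed by your unprivileged shell; use a form like the following instead (tee is just one option):

$ echo OFF | sudo tee /sys/kernel/debug/vgaswitcheroo/switch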
Known issues
Everything seems to work but the output is black
This is only a problem with DRI2. Try using a re-parenting compositor; such compositors usually provide 3D effects.
WARNING: Currently, KWin only works with desktop effects enabled. If a window turns pure black, try minimizing/maximizing or resizing it. This bug is being investigated.
Poor performance when using the Nouveau card
Right now, Nouveau does not support reclocking and other power management features. This severely cripples the GPU's performance and increases power consumption compared to the proprietary driver.
Using PRIME with Nouveau may not result in any performance gain right now, but it should in a not-so-distant future.
Discrete card will not switch off
Some secondary GPUs report that they have the VGA port enabled. This has been seen on the GeForce GT 520M.
Symptoms include higher battery drain, constantly running fans, and Xorg attempting to treat the discrete card as a secondary monitor.
This issue can be detected by opening a console and typing xrandr:
[root@localhost ~]# xrandr
Screen 0: minimum 8 x 8, current 2390 x 768, maximum 32767 x 32767
LVDS1 connected primary 1366x768+0+0 (normal left inverted right x axis y axis) 293mm x 164mm
1366x768 60.02*+
1024x768 60.00
800x600 60.32 56.25
640x480 59.94
DP1 disconnected (normal left inverted right x axis y axis)
HDMI1 disconnected (normal left inverted right x axis y axis)
VGA1 disconnected (normal left inverted right x axis y axis)
VIRTUAL1 disconnected (normal left inverted right x axis y axis)
VGA-1-2 connected 1024x768+1366+0 (normal left inverted right x axis y axis) 0mm x 0mm
1024x768 60.00*
800x600 60.32 56.25
848x480 60.00
640x480 59.94
1024x768 (0x63) 65.000MHz
h: width 1024 start 1048 end 1184 total 1344 skew 0 clock 48.36KHz
v: height 768 start 771 end 777 total 806 clock 60.00Hz
800x600 (0x64) 40.000MHz
h: width 800 start 840 end 968 total 1056 skew 0 clock 37.88KHz
v: height 600 start 601 end 605 total 628 clock 60.32Hz
800x600 (0x65) 36.000MHz
h: width 800 start 824 end 896 total 1024 skew 0 clock 35.16KHz
v: height 600 start 601 end 603 total 625 clock 56.25Hz
In this instance you can see there is an LVDS1 connector, which is the panel connected to the IGP. We also have VGA1, which is the VGA port on the laptop, and VGA-1-2, which is the VGA port the discrete GPU claims to have but in reality does not.
To get your machine working at its optimum, we need to disable that VGA-1-2 port. This is accomplished by adding video=VGA-2:d to the kernel command line. (Note the -2 corresponds to the last -2 in the connector name reported by xrandr.)
Be aware that in some instances this could mean your external VGA port no longer works, because it is possible that the discrete GPU really is connected directly to the VGA port on your laptop.
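For example, on GRUB-based systems (paths and command names vary by distribution) you would append the parameter to the default command line in /etc/default/grub, keeping your existing options in place of the ellipsis, and then regenerate the configuration:

GRUB_CMDLINE_LINUX_DEFAULT="... video=VGA-2:d"
# grub-mkconfig -o /boot/grub/grub.cfg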