First an April Fool’s joke, now real: Nvidia G-Assist launches today – if you have a GPU with at least 12 GB of VRAM

At the last Computex, Nvidia announced an AI assistant called “G-Assist”. Now the first version of the feature is available.

Almost exactly seven years to the day after the original April Fool’s joke and around nine months after the actual announcement, Nvidia G-Assist is now part of the GeForce portfolio.

As the developers announce, the AI assistant is included in update 11.0.3.218 of the Nvidia app, which has been replacing GeForce Experience since the end of last year.

  • In its initial version, Nvidia G-Assist is essentially an AI chatbot: you can use familiar text-based prompts to ask questions about your hardware and how to optimize it.
  • One example given is analyzing performance during gameplay: here you can also instruct G-Assist to adjust your GPU clock to reduce frame-rate drops, or to optimize the game’s settings.

A complete list of the over 70 prompts supported by version 0.1 of G-Assist can be found on the official Nvidia website. The originally announced in-game help – for example, when you get stuck in a quest – is not yet part of the assistant.

Nvidia G-Assist: System Requirements

To use the G-Assist features, you need to have a GeForce graphics card with at least 12 GB of video memory, among other things. The following GPUs are compatible at the time of this writing:

  • RTX 3060 12 GB, RTX 3080 12 GB, RTX 3080 Ti, RTX 3090 (Ti)
  • RTX 4060 Ti 16 GB, RTX 4070 (Super), RTX 4070 Ti (Super), RTX 4080 (Super), RTX 4090
  • RTX 5070, RTX 5070 Ti, RTX 5080, RTX 5090

In addition to the latest app update, you need the latest GeForce driver 572.83 and 6.5 GB of disk space for the text-based assistant; a further 3 GB is required for voice control.

Also note that G-Assist is currently only available in English.

Nvidia G-Assist: Local AI based on Llama

An internet connection, however, is not required, as Nvidia further explains. Instead, G-Assist runs as a local AI on your RTX graphics card.

  • According to Nvidia, this is made possible by a Llama-based instruction model whose eight billion parameters make it only a fraction of the size of today’s large AI models.
  • This also explains the memory requirements: when G-Assist receives a text input, “the Nvidia GPU allocates some of its horsepower for AI inference”. Brief stutters may occur while the assistant works out the appropriate response.

At version 0.1, G-Assist is of course still in the early stages of its functionality. Nvidia is not the only one expanding it, however: the developers have published a GitHub repository that provides examples and instructions.

Community developers can use it to “define functions in simple JSON scripts”; complete plug-ins can also be submitted to Nvidia for possible integration.

As an example, the developers mention a Twitch plug-in that can tell you whether your favorite channel is currently live and provide the associated stream information – or whether it is offline.
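A function definition for such a Twitch plug-in might look roughly like the following sketch – note that the field names and structure here are illustrative assumptions, not Nvidia’s actual plug-in schema:

```json
{
  "name": "check_twitch_live",
  "description": "Check whether a given Twitch channel is currently live and return its stream information",
  "parameters": {
    "channel": {
      "type": "string",
      "description": "Name of the Twitch channel to look up"
    }
  }
}
```

In a scheme like this, the name and descriptions tell the local language model when to invoke the plug-in, while the actual lookup against Twitch would be handled by the plug-in’s own code.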