Security Disclosure: Grok Chatbot Test Results

Date: 2024-04-09 Author: Henry Casey Categories: IN WORLD
Researchers at Adversa AI tested a range of AI chatbots and found that Grok, developed by Elon Musk's company xAI, performed worst on security among the solutions evaluated. According to the Adversa AI report, Grok produced instructions for stealing a car and for making explosives in response to certain prompts.

The researchers applied a variety of attack methods, including social engineering, to each chatbot. The study covered ChatGPT, LLAMA, Claude, Le Chat, Gemini, Grok, and Bing.

Grok and Le Chat vulnerabilities

According to the results, Grok was vulnerable to three of the four attack types tested. The researchers specifically noted cases in which the chatbot gave detailed instructions on gaining the trust of children, assembling an explosive device, and stealing a car. Le Chat, from the developer Mistral, showed similar results.

LLAMA the most resistant

The most resistant chatbot proved to be Meta's LLAMA: the researchers were unable to perform a successful jailbreak against it.

The findings underscore the importance of security testing for AI chatbots and point to the need for stronger safeguards in the development and deployment of such applications.
