ChatGPT: A Powerful Contract Writer, Not a Security Auditor
A recent study by Salus Security, a blockchain security company, sheds light on the capabilities of GPT-4, an artificial intelligence system, in analyzing and evaluating smart contracts. While GPT-4 is proficient at generating and parsing code, it falls short as a security auditor. The findings indicate that although GPT-4 can help with code parsing and flagging potential vulnerabilities, it cannot replace the expertise of professional auditors and dedicated auditing tools.
To measure GPT-4’s ability to detect security weaknesses, the Salus researchers used a dataset of 35 smart contracts containing a total of 732 vulnerabilities, focusing on seven common vulnerability types. GPT-4 showed promise in identifying true positives, that is, actual vulnerabilities of genuine concern, achieving more than 80% precision during testing. However, it also produced a large number of false negatives, missing real vulnerabilities and yielding a recall rate of only 11%.
Recall, a key statistical measure, represents the share of actual vulnerabilities a system manages to detect, so a low recall means many real flaws go unnoticed. Combined with an overall accuracy of only 33%, GPT-4’s vulnerability detection capabilities were deemed insufficient. Therefore, until AI systems like GPT-4 are improved, the researchers recommend utilizing traditional auditing methods and relying on the expertise of experienced auditors.
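To make these metrics concrete, here is a minimal sketch of how precision and recall are computed from true positives (TP), false positives (FP), and false negatives (FN). The counts below are hypothetical, chosen only to reproduce the study’s reported 11% recall; they are not the study’s raw data.

```python
def precision(tp: int, fp: int) -> float:
    # Precision: the share of flagged findings that are real vulnerabilities.
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    # Recall: the share of actual vulnerabilities that were flagged.
    return tp / (tp + fn)

# Hypothetical counts for illustration only: out of 100 real
# vulnerabilities, the tool flags 11 correctly (TP), misses 89 (FN),
# and raises 2 false alarms (FP).
tp, fp, fn = 11, 2, 89
print(f"precision = {precision(tp, fp):.2f}")  # high: most flags are genuine
print(f"recall    = {recall(tp, fn):.2f}")     # low: most real bugs are missed
```

The asymmetry is the study’s core finding in miniature: a tool can be trustworthy about what it does flag (high precision) while still missing the overwhelming majority of real vulnerabilities (low recall).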
While GPT-4 can serve as a valuable tool in assisting with the auditing of smart contracts, particularly in code parsing and vulnerability indicators, it is not reliable enough to be solely depended upon for security auditing. Combining GPT-4 with other auditing approaches and tools is essential to enhance the overall accuracy and efficiency of the audit process. Until technological advancements are made, the human element and dedicated auditing tools remain crucial in ensuring the security of smart contracts.
5 thoughts on “ChatGPT: A Powerful Contract Writer, Not a Security Auditor”
For all the hype, GPT-4’s proficiency in code parsing doesn’t outweigh its limitations in security auditing.
Though GPT-4’s accuracy rate may be low, it’s a reminder that AI is a continually evolving field. With further advancements and fine-tuning, it has the potential to become a more reliable security auditor for smart contracts in the future!
This study emphasizes the need for a collaborative approach to security auditing. 🤝 GPT-4 can play a valuable role alongside human auditors and dedicated tools, enhancing the overall accuracy and efficiency of the process. 🔄
It’s interesting to see that GPT-4 produced so many false negatives. Maybe future versions of AI systems can improve on that. Nonetheless, the findings emphasize the importance of experienced auditors to ensure thorough security checks.
Only 11% recall rate? That’s pretty abysmal for an AI system. 🙄