Cybersecurity firm Forta predicted $200 million Euler Finance hack using AI


Advances in artificial intelligence are a double-edged sword for cybersecurity companies involved in decentralized finance.

Forta Network, a Web3 cybersecurity company whose clients include Polygon, Compound, and Lido, spotted last month's cyberattack on cryptocurrency lending platform Euler Finance, which lost $200 million.

“A number of machine learning models detected the Euler attack. [It] essentially gave the Euler team a few minutes of warning, even before the funds were stolen: ‘Your protocol is about to be attacked. Something needs to be done,’” Seifert told Decrypt.

“Blockchain is a great fit for these machine learning approaches because the data is public,” explains Seifert. “We can see every transaction, every account, and see how much money is actually lost, which bodes well for training some of these models.”
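The approach Seifert describes, training models on public transaction data and flagging outliers, can be illustrated with a minimal sketch. This is not Forta's actual system; the protocol names, amounts, and the simple z-score heuristic are all illustrative assumptions standing in for real machine learning models.

```python
# Hypothetical sketch (not Forta's actual models): because every on-chain
# transaction is public, a detector can learn a baseline from historical
# transfer amounts and flag exploit-sized deviations in real time.
from statistics import mean, stdev

def train(history):
    """Learn a per-protocol baseline from public transfer amounts."""
    return {"mean": mean(history), "stdev": stdev(history)}

def is_suspicious(model, amount, threshold=4.0):
    """Flag transfers that deviate far from the learned baseline."""
    if model["stdev"] == 0:
        return amount != model["mean"]
    z = abs(amount - model["mean"]) / model["stdev"]
    return z > threshold

# Typical lending-pool flows (amounts in ETH, illustrative values only)
baseline = train([120.0, 95.0, 110.0, 130.0, 105.0, 98.0])

print(is_suspicious(baseline, 101.0))     # ordinary transfer: False
print(is_suspicious(baseline, 85_000.0))  # exploit-sized outflow: True
```

Real systems use far richer features (contract calls, flash-loan patterns, account history), but the premise is the same: public data makes both training and live scoring possible.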

Although Forta's system detected the malicious activity on Euler’s blockchain protocol and sent an alert, Euler could not act quickly enough to shut down the network before the funds were stolen. After negotiations with the hackers, however, the funds were returned.

“All recoverable funds taken from the Euler protocol on March 13 have now been successfully returned by the exploiter,” reads a post shared by Euler’s official Twitter account.

“There were three critical Forta alerts prior to the exploit,” Forta said in a blog post. “Unfortunately, in this case, the attack was still too fast for [Euler’s] standard manual multisig response to suspend the contract.”

Seifert joined Forta in April 2022 after 15 years at Microsoft, where he was a Principal Group Manager overseeing the tech giant’s cybersecurity and threat detection team. Forta launched in 2021 with $23 million raised from Andreessen Horowitz, Coinbase Ventures, Blockchain Capital, and others.

While Forta can leverage its own machine learning to identify malicious activity on the blockchain, Seifert believes AI has drawbacks too, including the potential for manipulating ChatGPT, the chatbot developed by OpenAI, which received a $10 billion investment from his former employer, Microsoft.

“There are two sides of the coin,” says Seifert. “I believe many AI technologies can be used to create more customized and compelling social engineering attacks.

“You can feed ChatGPT a LinkedIn profile and ask it to compose an email inviting that person to click on a link. It will be highly customized,” he explained. “So I think that if some of these models are abused, we’ll see higher click-through rates.”

“On the positive side, machine learning is an integral part of threat detection,” says Seifert.

A report from Immunefi earlier this month found that hacks in the crypto industry increased 192% year-over-year, from 25 incidents to 73 in the quarter. In another serious crypto hack, $10 million was stolen on Ethereum in December.

Scott Gralnick is Director of Channel Partnerships at Halborn, a blockchain cybersecurity company that has raised $90 million and whose clients include Solana and Dapper Labs.

“New technology always creates a double-edged sword,” Gralnick said. “Just as people are using AI to try new attack vectors, white hat hackers are using this technology to augment their arsenal of tools to protect these companies and ecosystems, seeking to ethically protect the entire ecosystem.”

Microsoft recently launched Security Copilot, a chat service that lets cybersecurity personnel ask questions about security incidents, receive AI-generated answers, and get step-by-step instructions on how to mitigate risk. Seifert hopes cybersecurity workers will take advantage of AI language models in much the same way.

“What’s new now is these large language models that understand context very well. They understand code very well,” says Seifert. “I think it opens the door primarily for incident responders.

“If you think about an incident responder facing alerts and transactions in the Web3 space, they may not know what to look at. Can you translate technical data into natural language, making it more accessible to a wider audience?” he asked. “Can the person ask natural language questions to guide the investigation?”
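The translation step Seifert raises, turning raw alert data into plain English an analyst or a language model can reason about, might look like this hypothetical sketch. The alert fields, the "ExampleLend" protocol, and the transaction hash are all placeholders, not data from any real system.

```python
# Hypothetical illustration: render a raw Web3 alert as plain English so an
# incident responder (or an LLM assistant) can work with it directly.
alert = {
    "severity": "critical",
    "protocol": "ExampleLend",  # placeholder protocol name
    "tx_hash": "0xabc123",      # placeholder transaction hash
    "signal": "flash loan followed by abnormal outflow",
    "amount_usd": 1_250_000,
}

def describe(alert):
    """Translate a structured alert into a natural-language summary."""
    return (
        f"{alert['severity'].upper()} alert on {alert['protocol']}: "
        f"transaction {alert['tx_hash']} shows a {alert['signal']}, "
        f"moving about ${alert['amount_usd']:,} out of the protocol."
    )

print(describe(alert))
```

In practice this kind of summary would be fed to, or generated by, a large language model, which could then field the responder's follow-up questions.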

A recent Pew Research survey of 11,004 U.S. adults found that 32% of Americans expect artificial intelligence to have a mainly negative impact on workers over the next 20 years, while only 13% said AI would help the workforce more than harm it.

Seifert is in the minority.

“One of the things people talk about all the time is, ‘Oh, will AI replace humans?’ I don’t think so,” he says. “I think AI is a tool that can augment and support humans, but humans must always be involved to make some of these decisions.”
