platform, including automated penetration tests and risk assessments culminating in a "cyber risk score" out of 1,000, much like a credit score.
Using ChatGPT to Deobfuscate Malicious Scripts (Wed, Mar 13th)
Published on 2024-03-13 08:26:17 UTC
Today, most malicious scripts in the wild are heavily obfuscated. Obfuscation is key to slowing down the security analyst's job and to bypassing simple security controls. There are many techniques available. Most of the time, trained eyes can spot them in a few seconds, but they remain a pain to process manually. How do we handle them? For some of them, you have tools like numbers-to-strings.py[1], developed by Didier, to convert classic encodings back to strings. Sometimes, you can write your own script (time consuming) or use a CyberChef recipe. To speed up the analysis, why not ask AI tools for some help? Let's see a practical example with ChatGPT.
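As a quick illustration of the "numbers back to strings" idea mentioned above (a minimal sketch of the general technique, not Didier's tool itself), the following Python snippet pulls decimal character codes out of a blob, such as a String.fromCharCode() or Chr() chain, and maps them back to readable text:

```
import re

def decode_charcodes(blob: str) -> str:
    """Convert a run of decimal character codes (e.g. from a
    String.fromCharCode(...) or Chr(...) chain) back to a string."""
    codes = re.findall(r"\d+", blob)
    return "".join(chr(int(c)) for c in codes if int(c) < 0x110000)

# Hypothetical example of a fromCharCode() payload seen in obfuscated JavaScript
sample = "String.fromCharCode(104,116,116,112,115,58,47,47)"
print(decode_charcodes(sample))  # -> https://
```

This only covers the simplest encoding; layered obfuscation (string reversal, XOR, Base64, etc.) is where a CyberChef recipe or an AI assistant becomes more convenient.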