On November 30, 2022, ChatGPT shook the digital world, sending a tremor that rattled even the cybersecurity industry. Rather than responding in panic, a more sensible approach is to begin learning how to leverage the technology to streamline your workflow and sharpen your skills.
In this post, we explain how ChatGPT can be used to improve your cybersecurity posture and data breach resilience.
Your cybersecurity posture reflects your level of cyber threat resilience – the stronger your cybersecurity posture, the greater your chances of withstanding a potential data breach. Multiple factors feed into this security status, including people, hardware, policies, and technology service providers.
This post focuses on the application of ChatGPT across three areas:
Because cyber threat awareness training usually involves lengthy video presentations, this essential cybersecurity requirement often gets pushed behind more urgent work commitments, which increases an organization's risk of suffering a data breach.
With the support of ChatGPT, cybersecurity lessons taught by B-grade actors in cringeworthy data breach scenarios can be replaced with more frequent practical experience - the best kind of teacher.
At the heart of this strategy is a simulated phishing message generated in ChatGPT. Here's an example:
Prompt:
A prompt simulating a phishing email, where employees are asked to identify what looks suspicious about the email, such as unusual sender address or typos, and to report the email if they suspect it's a phishing attempt.
Output:
"Dear [employee],
We have noticed that there has been some suspicious activity in your account, and we want to ensure that your account is secure. Please click the link below to verify your identity and secure your account.
[Phishing link]
Thank you,
The IT Department"
Other variations can be generated with the following prompts:
By plugging this message into an email marketing solution and sending it to all staff, you've got yourself a simple and cost-effective internal phishing resilience program.
With the support of ChatGPT, the resources involved in simulated phishing attacks are significantly reduced, opening up the more cost-effective option of managing phishing exercises internally rather than outsourcing to a third party.
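One way to streamline this further is to generate the simulated phishing email programmatically. The sketch below builds a chat-completion request payload for the OpenAI API; the model name, system prompt, and wording are assumptions, and the actual network call (made with your API key via the official `openai` package) is left out.

```python
# Sketch: building a request payload for a simulated phishing email.
# Model name and prompt wording are assumptions; adapt to your account.
import json


def build_phishing_sim_request(employee_name: str) -> dict:
    """Build a chat-completion payload requesting a training phishing email."""
    prompt = (
        "For an internal security awareness exercise only, write a short "
        "simulated phishing email asking the recipient to verify their "
        "account via a link. Address it to " + employee_name + " and "
        "include a [Phishing link] placeholder instead of a real URL."
    )
    return {
        "model": "gpt-3.5-turbo",  # assumed model name
        "messages": [
            {
                "role": "system",
                "content": "You help create security awareness training material.",
            },
            {"role": "user", "content": prompt},
        ],
    }


payload = build_phishing_sim_request("[employee]")
print(json.dumps(payload, indent=2))
# Send this payload to the chat completions endpoint with your API key.
```

Keeping the `[Phishing link]` placeholder in the prompt ensures the generated email slots directly into the mail-merge workflow described below.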
For this exercise to be effective, the email should appear to come from an external source rather than your organization. Not all email marketing solutions let you modify the sender's name. If yours doesn't, a cost-effective alternative that does is the Gmail extension Yet Another Mail Merge (YAMM). YAMM is ISO 27001 certified and GDPR compliant.
Here's how to use YAMM and ChatGPT for phishing training exercises (you will need a Gmail or Google workspace account to do this).
Important: The process outlined below is for educational purposes only and should never be used for malicious purposes.
The free YAMM tier only tracks email opens, not email clicks. Because email clicks are a stronger indicator of susceptibility to phishing compromise, a premium package should be purchased.
YAMM's pricing is only $50 per year for an individual user with a quota of 1,500 emails daily. If you wish to manage this phishing exercise program with multiple users, the price jumps to $240 a year for ten users, which is still a considerably smaller investment than outsourcing anti-phishing training to a third party.


After signing up for a package, follow the user workflow to install the Gmail extension.
Using the prompts outlined above for guidance, generate a simulated phishing attack in ChatGPT. Compose a new message in Gmail and paste this simulated phishing email into it.

Replace the "phishing link" placeholder with a link of your choice. It's not too important where this link directs the user; the point of this exercise is to capture link clicks. To prevent staff from reading the link URL and realizing this is just a training exercise, it's best to insert the link as a hyperlink in the call-to-action section of the email, like so:

Change the employee field to follow this convention: {{employee}}. This will allow you to personalize each email.

Insert a subject. If you can't think of one, ask ChatGPT for help.

With the recipient field left blank, click the X in the composer window to close it.

Open a new Google Sheets spreadsheet and give it a title. Change the sheet's name to reflect the month of the test, or the period in accordance with your testing frequency.
Add two columns, one named "employee" and the other "email."

Now add the names and emails of all the recipients in your workplace.

To create the email campaign, navigate to Tools > Yet Another Mail Merge > Start Mail Merge.
Enter a sender name and select the email draft you saved in the "Email Template" field.

To test the campaign, click "Send test email." This will send a test email to yourself. Once you're ready, hit "Send."
To track email opens and clicks, navigate to Tools > Yet Another Mail Merge > Open Tracking Report. You will then see which recipients opened your email and which clicked on its link in the "Merge Status" column.

To send a new campaign, open a new sheet and repeat the steps above. It's best to keep a record of all previous phishing tests to track improvement.
This workflow isn't contingent on the use of ChatGPT; it doesn't take much effort to compose a phishing email yourself. The benefit of using ChatGPT is the speed at which it can produce simulated phishing emails, enabling you to run these exercises more often and thereby increase your organization's phishing attack resilience and, with it, its cybersecurity posture.
Social media threat hunting is the process of scanning social media data to detect potential cyber threats. The process involves scanning social media feeds for specific keywords indicative of sensitive data exposure or phishing attacks.
To use ChatGPT for social media threat hunting, you must first collect data from social media platforms using social listening tools like Hootsuite Insights, Brandwatch, or Talkwalker, and then feed that data into ChatGPT for analysis through its API.
Learn more about the OpenAI API >
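The hand-off between the scraping tool and ChatGPT can be as simple as batching the collected posts into a single analysis prompt. A minimal sketch, assuming the posts have already been exported upstream; the post text, organization name, and model name are all placeholders:

```python
# Sketch: packaging scraped social media posts into a ChatGPT analysis
# request. Scraping itself is done upstream by a social listening tool.

posts = [  # hypothetical scraped posts
    "Just saw a weird login page claiming to be Example Corp...",
    "Example Corp's new product launch looks great!",
]


def build_threat_hunt_request(posts, org_name="Example Corp"):
    """Bundle posts into one chat-completion payload for threat triage."""
    joined = "\n".join(f"- {p}" for p in posts)
    prompt = (
        f"Review the following social media posts mentioning {org_name} "
        "and flag any that suggest phishing, malware, or data exposure:\n"
        + joined
    )
    return {
        "model": "gpt-3.5-turbo",  # assumed model name
        "messages": [{"role": "user", "content": prompt}],
    }


request = build_threat_hunt_request(posts)
print(request["messages"][0]["content"])
```

The payload would then be sent to the chat completions endpoint with your API key, with the model's reply reviewed by an analyst.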
Here are some examples of ChatGPT prompts that can be used for social media threat monitoring:
"Find all social media posts mentioning the organization's name in the past 24 hours and determine if any contain suspicious keywords related to cybersecurity threats, such as 'malware' or 'phishing.'"
"Identify all social media accounts that have mentioned the organization's name in the past week and determine if any are known to be associated with malicious activity, such as known hacking groups or phishing campaigns."
"Scan all social media posts from the organization's official accounts and determine if any contain links to suspicious domains or websites."
"Identify all social media accounts that have mentioned the organization's name and determine if any have a history of suspicious activity, such as frequently posting links to malicious websites or engaging in social engineering attacks."
"Scan all social media posts for mentions of the organization's name and determine if any contain suspicious keywords related to cybersecurity threats, such as 'ransomware' or 'data breach,' and flag them for further investigation."
To perform these tasks, ChatGPT can be primed with specific keywords based on your organization's classification of sensitive data - similar to the process used in data leak detection.
These keywords don't necessarily need to link to sensitive data. Asking ChatGPT to scan social media feeds for mentions of your company name will help you better understand your online presence and any potential security risks associated with this exposure.
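A cheap local keyword screen can also be applied to the scraped feed before anything is sent to ChatGPT, cutting API volume down to only the posts worth analyzing. A minimal sketch; the keywords and posts are hypothetical and should be drawn from your own sensitive-data classification:

```python
# Sketch: a local keyword screen applied to scraped posts before any
# ChatGPT analysis. Keywords and posts here are hypothetical.
import re

THREAT_KEYWORDS = ["malware", "phishing", "ransomware", "data breach"]


def flag_posts(posts, keywords=THREAT_KEYWORDS):
    """Return only the posts containing a threat keyword (case-insensitive)."""
    pattern = re.compile(
        "|".join(re.escape(k) for k in keywords), re.IGNORECASE
    )
    return [p for p in posts if pattern.search(p)]


posts = [
    "Loving the new Example Corp dashboard!",
    "Heads up: possible phishing emails spoofing Example Corp support.",
]
print(flag_posts(posts))  # only the phishing mention is flagged
```

Only the flagged posts then need to be forwarded to ChatGPT for deeper triage.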
The third-party threat landscape has a significant impact on your cybersecurity posture; almost 60% of data breaches are caused by compromised third-party vendors. Security questionnaires are one of the most effective methods of identifying complex security risks across your vendor network. Rather than designing questionnaires yourself from the ground up, ChatGPT can be used to draft security questionnaires based on specified requirements.
The word "draft" here is key, as the generated questionnaire will most likely require tweaking to improve its accuracy. ChatGPT is simply a tool for improving process efficiency, so it should be regarded as an assistant rather than an employee.
Here are some example prompts and outputs:
Prompt:
Generate an ISO27001 questionnaire
Output:
Prompt:
Keep going.
Output:
A custom questionnaire can be generated by helping ChatGPT understand the unique context of the environment that will be assessed.
Prompt:
I want you to design a custom security questionnaire for a vendor that processes customer invoice data, which includes sensitive financial information. This vendor has had a history of data exposures through leaky AWS S3 buckets. I want this questionnaire to be tailored to the unique security risks of this vendor.
Output:
As you can see, these outputs are not perfect. The questions addressing leaky bucket risks are very high-level, and the entire questionnaire would need to be modified before it's sent to the vendor. Usually, ChatGPT needs to be guided towards improved accuracy with follow-up prompts refining each output. Providing an example is the best way to improve accuracy.
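This refinement loop works because each follow-up prompt is interpreted against the full conversation so far. When working through the API rather than the chat interface, that context must be carried forward explicitly. A minimal sketch; the assistant reply shown is a placeholder that would in practice come from the API response:

```python
# Sketch: refining questionnaire output with follow-up prompts by carrying
# the conversation history forward. The assistant message is a placeholder;
# in practice it comes from the previous API response.

history = [
    {"role": "user", "content": "Generate an ISO27001 questionnaire"},
    {"role": "assistant", "content": "<first draft of the questionnaire>"},
]


def add_follow_up(history, refinement):
    """Append a refinement prompt so the model sees the full context."""
    history.append({"role": "user", "content": refinement})
    return history


add_follow_up(
    history,
    "This vendor uses S3 to host its data. Address whether the vendor's "
    "S3 bucket ACLs are configured correctly.",
)
print(len(history))  # 3 messages now accompany the next API call
```

Each subsequent request sends the whole `history` list, so the model refines its previous draft rather than starting over.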
Important: ChatGPT is currently trained on data sets extending only up to 2021, so any recent developments in the field of cybersecurity will not be reflected in the tool's initial outputs. When this happens, additional prompts are required to educate the solution about these changes. Because of this, ChatGPT should only be used by cybersecurity professionals who understand how to shape its outputs toward greater accuracy.
Let's improve the accuracy of this output.
Prompt:
This vendor uses S3 to host its data. I want this questionnaire to address whether the vendor's S3 bucket ACLs are configured correctly.
These modified questions are now a little more specific:
Integrating ChatGPT into your workflow doesn't remove all administrative duties. Prompts still need to be manually submitted and continuously refined to steer the tool toward a desired output. Still, even with this necessary manual component, ChatGPT demonstrates incredible potential for streamlining processes and reducing costs, especially in the area of cyber threat awareness training.
ChatGPT is unlikely to replace a cybersecurity professional in the foreseeable future, but to increase their value in this fast-evolving age of AI, cybersecurity professionals should master the process of submitting prompts to achieve a desired output, a new field of programming known as "prompt engineering."