Securing your human layer with UpGuard's new User Risk product
Speaker: Michael Tan, Senior Product Marketing Manager, User Risk
Humans are often the weakest link in cybersecurity. Yet current solutions, like annual awareness training, often feel like a box-checking exercise with no real way to measure impact.
To address this, UpGuard announced User Risk, a new product that unifies identity, behavior, and threat signals to secure your workforce from a single platform.
Key features & capabilities:
True visibility: User Risk connects to the tools you already use, like Microsoft Entra and Google Workspace, to break down data silos. It automatically scans for Shadow AI apps, showing which employees are using them and what permissions they’ve been granted.
AI-driven prioritization: An AI analyst automatically scores and ranks each risk, giving your team a clear, prioritized action plan to focus on the teams and individuals who need immediate attention.
Build a security-first culture: The product delivers contextual nudges directly within an employee's workflow, for instance, guiding them to use an approved AI tool instead of an unauthorized one. Every employee also receives their own security score to see how their actions impact their posture.
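The prioritization idea above — score each finding, then rank so the riskiest users and teams surface first — can be sketched generically. This is a minimal illustration, not UpGuard's actual scoring model; the fields and the severity-times-likelihood formula are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    user: str
    app: str           # e.g. an unsanctioned "shadow AI" tool
    severity: float    # 0-1: how damaging a compromise would be
    likelihood: float  # 0-1: how likely the behavior leads to an incident

def prioritize(findings):
    """Rank findings so the highest-risk users surface first."""
    return sorted(findings, key=lambda f: f.severity * f.likelihood, reverse=True)

findings = [
    Finding("alice", "approved-llm", 0.2, 0.1),
    Finding("bob", "unsanctioned-chatbot", 0.9, 0.7),
    Finding("carol", "file-sharing-ai", 0.6, 0.5),
]
for f in prioritize(findings):
    print(f"{f.user}: {f.app} -> {f.severity * f.likelihood:.2f}")
```

A real product would fold in many more signals (identity, behavior, threat intel), but the output is the same shape: an ordered action plan rather than an undifferentiated alert list.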
Key takeaways:
Most breaches involve a human element, yet traditional solutions like annual training are often just a box-checking exercise, and security teams are left "flying blind" without real visibility into workforce risk.
The new User Risk product was announced to solve this. It unifies identity, behavior, and threat signals to secure your workforce, enable safe AI adoption, and build a security-first culture from a single platform.
User Risk detects "Shadow AI" by showing which employees are using which AI tools, delivers in-workflow "contextual nudges" to build better security habits, and provides each employee with a personal security score.
User Risk is now available in a private beta program exclusively for existing UpGuard customers.
Meet your newest team member: The AI threat analyst
Speaker: Peter Brittliff, Senior Product Marketing Manager, Breach Risk
In today's cyber threat climate, attackers "log in" rather than "break in", leveraging a full-scale cybercrime supply chain on the dark web. The problem for security teams isn't a lack of signals, but being overwhelmed by noise and false positives.
UpGuard introduced the AI Threat Analyst, a new member of your team powered by the Grid and now part of UpGuard’s BreachRisk monitoring.
How it works:
Focused Monitoring: The analyst uses "Transforms" to monitor specific domains, brand names, or tokens and their variations across numerous attack vectors.
Reduces Noise: It employs specialized, source-aware agents for GitHub, dark web marketplaces, and stealer logs to understand context. Using your organization's context, it automatically triages threats, dismissing low-risk findings while logging them for transparency.
Provides Guided Action: For confirmed threats, the analyst generates clear, specific remediation guidance, transforming signals into guided actions so your team can respond effectively.
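The triage behavior described above — dismiss low-risk findings while logging them for transparency, and attach remediation guidance to confirmed threats — can be sketched as follows. This is a hypothetical illustration, not UpGuard's implementation; the threshold, signal fields, and remediation mapping are all assumptions.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("triage")

# Hypothetical mapping of threat type to remediation guidance
REMEDIATION = {
    "leaked_credential": "Rotate the credential and review recent sign-ins.",
    "exposed_token": "Revoke the token and audit repositories for further secrets.",
}

def triage(signal):
    """Dismiss low-risk signals (logging them for transparency); escalate the rest."""
    if signal["risk"] < 0.5:  # assumed cutoff for "low risk"
        log.info("Dismissed low-risk finding: %s", signal["id"])
        return None
    return {
        "id": signal["id"],
        "action": REMEDIATION.get(signal["type"], "Investigate manually."),
    }
```

The key design point is that dismissal is never silent: every suppressed finding leaves an audit trail, so the noise reduction stays reviewable.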
Status: The AI Threat Analyst is now available as an expansion to your existing UpGuard BreachRisk subscription.
Key takeaways:
The modern threat landscape has evolved; attackers no longer break in, they "log in" using credentials from a full-scale cybercrime supply chain on the dark web, leaving security teams overwhelmed with alert fatigue from disconnected tools.
UpGuard introduced the AI Threat Analyst, a new part of UpGuard BreachRisk, designed to expose threats across the open, deep, and dark web.
The analyst uses smart instructions and orchestrates specialized, source-aware agents for GitHub, dark web marketplaces, and stealer logs to cut through the noise.
It automatically triages threats by inferring risk based on your organization's context, dismissing low-risk findings and providing clear, actionable remediation guidance for confirmed threats.
The analyst is now available as an expansion to existing UpGuard BreachRisk subscriptions.
Sharpening vendor risk decisions with AI and precision
With a new regulatory change rolling out every six minutes globally, managing vendor risk is more complex than ever. UpGuard's updates to the Vendor Risk platform are built to help you lead with confidence and are grouped into three pillars: assessment precision, AI augmentation, and risk translated for everyone.
Features now available:
Assessment precision at scale: You can now use Templated Control Sets to automatically apply the right controls based on vendor tier or criticality, ensuring high-risk vendors get the scrutiny they need without over-assessing low-risk ones. UpGuard has also increased the number of security checks from 211 to 395 for deeper visibility.
AI that augments: The AI Risk Analyst is now embedded directly on the vendor summary page, giving you an always-on, up-to-date view of vendor posture, surfaced risks, and recommended next steps.
Risk translated for everyone: Instant Risk Assessment can now generate tailored, audience-ready commentary in seconds. Whether for a formal audit or an executive briefing, you get sharper, more credible output instantly.
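The tier-to-controls matching behind Templated Control Sets can be sketched as a simple lookup. This is a generic illustration, not UpGuard's configuration; the tier names and control identifiers are invented for the example.

```python
# Hypothetical control sets keyed by vendor criticality tier
CONTROL_SETS = {
    "critical": ["soc2_report", "pen_test", "full_questionnaire", "onsite_review"],
    "high":     ["soc2_report", "full_questionnaire"],
    "low":      ["lightweight_questionnaire"],
}

def controls_for(vendor_tier: str) -> list[str]:
    """Return the control set matched to a vendor's tier; unknown tiers get the lightest set."""
    return CONTROL_SETS.get(vendor_tier, CONTROL_SETS["low"])
```

The point of templating is that the assessment burden scales with criticality automatically, instead of every vendor receiving the same questionnaire.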
Coming soon:
By the end of the quarter, you will be able to assess vendors against ISO 27001 and NIST CSF frameworks individually within the security profile. More security frameworks are on the roadmap.
Key takeaways:
The business landscape is evolving incredibly fast, with a new regulatory change rolling out somewhere in the world every six minutes. To keep up, Vendor Risk is evolving across three pillars: Assessment precision at scale, AI that augments, and risk translated for everyone.
Features available now include Templated Control Sets for risk-aligned assessments, an increase in security checks from 211 to 395, and the AI Risk Analyst embedded on the vendor summary page to surface risks and recommend next steps.
The Instant Risk Assessment feature has been enhanced to generate tailored, audience-ready insights in seconds, whether it's a formal write-up for an auditor or a light summary for an executive briefing.
Coming by the end of the quarter, you'll be able to assess vendors against ISO 27001 and NIST CSF frameworks individually within the security profile, with more frameworks on the roadmap.
Fireside chat: Adapting security leadership for the age of AI
Host: Greg Pollock, Director of Research and Insights at UpGuard
Guest: Erica Carrara, VP and CISO at The Greenbrier Companies
In a fireside chat hosted by Greg Pollock, Erica Carrara discussed the complexities of security leadership in the age of AI.
Carrara explained that the modern cyber risk landscape is defined by a lack of visibility into assets and identities, as well as rapidly evolving threats. She noted that attackers are increasingly targeting people rather than technology, with AI making threats like Business Email Compromise (BEC) more sophisticated by eliminating the grammatical errors that were once tell-tale signs of phishing.
To manage the risks of internal AI adoption, Carrara advises treating AI like any other new technology by establishing an acceptable use policy and assessing what data and systems it will touch. She stressed that foundational practices like Identity and Access Management (IAM) and data-centric Asset Management are crucial for building the necessary guardrails.
When dealing with third parties using AI, she applies the same fundamental TPRM questions: where the data is housed, how it will be used, and what happens in the event of a breach.
For organizations just beginning to formalize their approach, Carrara recommends starting an AI governance steering committee with enthusiastic employees. To demonstrate ROI, she suggests focusing initial AI projects on mature, well-documented business processes with repeatable outcomes.
Finally, Carrara believes AI will not reduce the number of security jobs but will instead change the nature of entry-level roles in the industry.
Key takeaways:
The attack surface has shifted to people: Threat actors are increasingly targeting employees over technology, and the rise of AI is making phishing and Business Email Compromise (BEC) attempts more convincing and harder to detect.
Apply fundamental principles to AI: The adoption of AI should be governed by the same security principles as any other technology. This includes creating an acceptable use policy and understanding who will use the tool, what data it can access, and the worst-case scenario of a compromise.
Foundational practices are crucial for AI governance: To manage AI risk effectively, organizations must have strong foundational practices in place, particularly Identity and Access Management (IAM) and Asset Management that includes data classification.
Third-party AI risk is still third-party risk: When a vendor uses AI, security teams should ask the same fundamental questions about data governance: where the data is stored, how it is used, who has access to it, and what the breach notification process is.
Start AI governance with a steering committee: To begin formalizing AI governance, create a steering committee with stakeholders who are excited about the technology. Leverage resources like cyber insurance providers or AI itself to help draft policies and charters.
Focus AI adoption on mature processes: To show a clear return on investment (ROI) from AI, focus initial adoption on business processes that are already mature, documented, and produce repeatable outcomes.
AI will change, not eliminate, security jobs: AI is unlikely to reduce the overall number of information security jobs; however, it may replace or alter the nature of entry-level positions within the security field.