Product

Bridging the Cloud Security Gap: Real-World Use Cases for Threat Monitoring

May 7, 2025

5 minute read

At Exaforce, as we work with our initial set of design partners to reduce the human burden on SOC teams, we’re gaining valuable insights into current cloud usage patterns that reveal a larger and more dynamic threat surface. While many organizations invest in robust security tools like CSPM, SIEM, and SOAR, these solutions often miss the nuances of evolving behaviors and real-time threats. This blog examines common cloud security anti-patterns and offers actionable guidance, including practical remediation measures, to continuously monitor, detect, and effectively respond to emerging threats.

Use Case: Single IAM User With Long-Term Credentials Accessed From Multiple Locations

A device manufacturing company relies on a single IAM user with long-term credentials for tasks such as device testing, telemetry collection, and metrics gathering across multiple factories in different geographic regions. This consolidated identity is used from varied operating systems (e.g., Linux, Windows) and environments, which amplifies risk.

AWS IAM user X accessing multiple S3 buckets from processes running in factories located in different locations.

Threat Vectors and Monitoring Recommendations

To mitigate the risks associated with such a setup, focus on continuous threat monitoring with these priority measures:

1. IP Allow-Listing

  • Define and enforce an allow-list of IP addresses for each factory.
  • Alert on any access attempt from an unauthorized IP.
  • Tool: AWS IAM policy conditions. The example below denies all requests that do not originate from the CIDR ranges 192.0.2.0/24 and 203.0.113.0/24.
AWS IAM policy to deny all requests unless requests originate from specified IP address ranges.
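
A policy along the following lines enforces the deny. This is a sketch based on AWS's documented aws:SourceIp condition; adapt the CIDR ranges to your factories.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyOutsideFactoryRanges",
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": ["192.0.2.0/24", "203.0.113.0/24"]
        }
      }
    }
  ]
}
```

Be aware that a blanket aws:SourceIp deny also blocks requests AWS services make on your behalf; AWS documents an aws:ViaAWSService condition to carve out that case.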

2. Resource Access Monitoring

  • Continuously monitor and log which resources the IAM user accesses.
  • Correlate access patterns with expected behavior for each factory or task.
  • Tool: SIEM platforms integrated with CloudTrail logs.
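
As an illustration, the correlation logic can be sketched in a few lines of Python; the bucket names, CIDR ranges, and event fields below are illustrative assumptions rather than values from the source.

```python
# Hedged sketch: flag CloudTrail-style events that touch a bucket outside
# the expected set for the event's source network. All names are assumptions.
import ipaddress

EXPECTED_BUCKETS = {
    "192.0.2.0/24": {"factory-a-telemetry", "factory-a-metrics"},
    "203.0.113.0/24": {"factory-b-telemetry"},
}

def unexpected_access(event):
    """True if the event's bucket is not expected for its source network."""
    ip = ipaddress.ip_address(event["sourceIPAddress"])
    bucket = event["requestParameters"]["bucketName"]
    for cidr, buckets in EXPECTED_BUCKETS.items():
        if ip in ipaddress.ip_network(cidr):
            return bucket not in buckets
    return True  # traffic from an unknown network is always anomalous

events = [
    {"sourceIPAddress": "192.0.2.10",
     "requestParameters": {"bucketName": "factory-a-telemetry"}},
    {"sourceIPAddress": "198.51.100.7",
     "requestParameters": {"bucketName": "factory-a-telemetry"}},
]
print([unexpected_access(e) for e in events])  # → [False, True]
```

In practice the same check runs as a SIEM detection rule over the CloudTrail feed rather than as a standalone script.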

3. Regular Credential Rotation

  • Implement strict policies to rotate long-term credentials periodically.
  • Automate token rotation and integrate alerts for unusual rotation delays.
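
The age check behind such an alert can be sketched as follows; the 90-day window is an assumed policy, and the key records only mirror the shape of IAM's list_access_keys response metadata.

```python
# Hedged sketch: flag access keys older than an assumed 90-day rotation window.
from datetime import datetime, timedelta, timezone

MAX_KEY_AGE = timedelta(days=90)

def stale_keys(keys, now):
    """Return the IDs of keys older than the rotation window."""
    return [k["AccessKeyId"] for k in keys if now - k["CreateDate"] > MAX_KEY_AGE]

now = datetime(2025, 5, 7, tzinfo=timezone.utc)
keys = [  # illustrative records shaped like IAM access-key metadata
    {"AccessKeyId": "AKIAEXAMPLEOLD", "CreateDate": datetime(2024, 12, 1, tzinfo=timezone.utc)},
    {"AccessKeyId": "AKIAEXAMPLENEW", "CreateDate": datetime(2025, 4, 20, tzinfo=timezone.utc)},
]
print(stale_keys(keys, now))  # → ['AKIAEXAMPLEOLD']
```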

4. User Agent and Device Validation

  • Identify and allow only a predefined list of acceptable user agents (e.g., specific OS versions like Linux and Windows Server) for each use case.
  • Flag anomalies such as access from unexpected operating systems (e.g., macOS when not approved).
  • Tool: SIEM platforms that correlate EDR and AWS CloudTrail logs and generate detections.
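
A minimal sketch of the user-agent check; the allow-listed prefixes are assumptions for illustration.

```python
# Hedged sketch: flag events whose userAgent is not on an approved prefix list.
ALLOWED_AGENT_PREFIXES = ("aws-cli/", "Boto3/", "aws-sdk-go/")  # assumed approved agents

def unexpected_agent(event):
    """True if the event's user agent does not match an approved prefix."""
    return not event.get("userAgent", "").startswith(ALLOWED_AGENT_PREFIXES)

print(unexpected_agent({"userAgent": "aws-cli/2.15.0 Python/3.11"}))  # → False
print(unexpected_agent({"userAgent": "Mozilla/5.0 (Macintosh)"}))     # → True
```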

Use Case: Long-Term IAM User Credentials in GitHub Pipelines

One of our SaaS provider partners embeds long-term AWS IAM user credentials directly in their GitHub Actions CI/CD pipelines as static GitHub secrets, allowing automation scripts to deploy services into AWS. This practice poses significant security risks: credentials stored in CI/CD pipelines can easily be exposed through accidental leaks or external breaches, as seen recently with Sisense (April 2024) and TinaCMS (Dec 2024), enabling attackers to gain unauthorized cloud access, escalate privileges, and exfiltrate sensitive data.

GitHub pipelines using long-term AWS IAM user access keys.

Threat Vectors and Monitoring Recommendations

To monitor and detect threats associated with this anti-pattern, consider these prioritized measures:

1. Credential Usage Monitoring

  • Continuously monitor IAM user activity and set alerts for any anomalous actions, such as unusual access patterns, region shifts, or privilege escalation attempts.
  • Tool: SIEM platform integrated with CloudTrail logs.

2. Regular Credential Rotation

  • Implement strict policies to rotate long-term credentials periodically.
  • Automate token rotation and integrate alerts for unusual rotation delays.

Remediation: Short-lived Credentials via OIDC

Transition to GitHub Actions' OpenID Connect (OIDC) integration, which issues temporary credentials instead of relying on embedded long-term keys, minimizing risk exposure.
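
A minimal sketch of such a workflow, assuming an IAM OIDC identity provider for token.actions.githubusercontent.com and a deploy role with a matching trust policy already exist (the role ARN and region are placeholders):

```yaml
permissions:
  id-token: write   # required so the job can request an OIDC token
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::111122223333:role/github-actions-deploy
          aws-region: us-east-1
      - run: aws sts get-caller-identity   # verifies the temporary credentials
```

No static AWS keys are stored in GitHub secrets; credentials exist only for the lifetime of the job.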

Use Case: Ineffective Use of Permission Sets in Multi-Account Environments

A cloud-first SaaS provider is misusing AWS permission sets by provisioning direct access in the management account, where sensitive permission sets and policies are defined, instead of correctly provisioning access across member accounts. This setup complicates policy management and leaves the management account largely unmonitored, creating blind spots where identity threats can emerge before affecting production or staging.

Complex IAM access management across multiple accounts.

Threat Vectors and Monitoring Recommendations

1. Monitoring Management Account Activity

  • Monitor all IAM and policy changes in the management account.
  • Tool: SIEM platform integrated with CloudTrail logs; detections should alert on any modification to permission sets and on cross-account role assumptions.

2. Misconfigured Trust Relationships

  • Audit and continuously validate trust policies for cross-account roles to ensure they only allow intended access.
  • Tools: AWS Config rules to flag deviations from approved configurations.
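
As a sketch, a cross-account trust policy can be narrowed to a single principal and an external ID; every ARN and ID below is a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/deploy-role"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:ExternalId": "example-external-id" }
      }
    }
  ]
}
```

Auditing should flag any trust policy whose Principal is broader than a specific role, or that lacks a condition like sts:ExternalId for third-party access.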

3. Policy Drift and Unauthorized Changes

  • Implement automated periodic reviews of permission sets and associated IAM roles. This ensures that any drift or unauthorized changes are quickly detected and remediated.
  • Tool: SIEM platform integrated with CloudTrail logs.

Use Case: Root User Access Delegated to a Third Party

Delegating root user access to a third party for managing AWS billing and administration may seem low-risk, but it leaves the company without direct oversight of its highest-privilege account. When the root credentials, including long-term passwords and MFA tokens, are controlled externally, the risk escalates dramatically: if the third party is compromised or mismanages its controls, attackers could gain unrestricted access to the entire AWS environment.

Third party with root user access to your AWS accounts.

Threat Vectors and Monitoring Recommendations

1. Monitoring Unauthorized Root Activity

  • Monitor all root user actions via CloudTrail and SIEM alerts for any anomalous behavior.
  • Tool: SIEM platform integrated with CloudTrail logs.

2. Third-Party Compromise

  • Regularly audit third-party access and security posture.
  • Tool: identity and access management tooling.

Remediation: Centralized root access

Remediate by removing standalone root credentials and migrating to centrally managed root access using AWS AssumeRoot, which issues short-term credentials for privileged tasks.
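
As a sketch, assuming the member account is enrolled in centralized root access, a privileged task can be performed with short-lived credentials roughly like this; the account ID is a placeholder, and the exact flags and task-policy name should be verified against current AWS documentation:

```shell
# Hedged sketch: obtain short-term root-scoped credentials for one task.
aws sts assume-root \
  --target-principal 111122223333 \
  --task-policy-arn arn=arn:aws:iam::aws:policy/root-task/S3UnlockBucketPolicy \
  --duration-seconds 900
```

The returned credentials are scoped to the named task policy and expire quickly, so there is no standing root password or MFA token for a third party to hold.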

Contact us to learn how Exaforce leverages Exabots to address these challenges.

News
Product

Reimagining the SOC: Humans + AI bots = Better, faster, cheaper security & operations

April 17, 2025

5 minute read

Announcing our $75M Series A to fuel our mission

Attack surfaces continue to grow as enterprises widen their digital footprint and AI goes mainstream into various aspects of the business. Meanwhile, CISOs and CIOs continue to struggle with their defenses – they all want their security operations centers (SOC) to be more efficacious, productive, and scalable.

A few years back, some of us were at F5 and Palo Alto Networks defending mission-critical applications and cloud services for global banks, major social media networks, and video collaboration platforms. We saw advanced cyber threats, from nation-states to organized crime, constantly probing our customers' defenses. Meeting strict 24x7x365 SLAs with limited talent was an uphill battle, and our SOC teams worked tirelessly. No matter how good we got, it felt like we were always reacting, never truly getting ahead.

Simultaneously, other members of our founding team were at Google pioneering large language models (LLMs). Their main focus was on improving the quality and consistency of output from these frontier models. AI showed massive promise to automate human work, but suffered from a few inherent flaws: limited long- and short-term memory, inconsistent reasoning, the cost of analyzing very large data sets, and so on.

Together, we reached the same conclusion: the problems of security and operations cannot be solved by hiring more people, building a bigger foundation model, or building a smaller security-specific model. The solution requires ground-up rethinking!

The magical combination: Humans + Bots

We founded Exaforce with a singular goal: a 10X speed-up for tasks done by humans. And nowhere is this work more complex than in enterprise security and operations. We have made great strides toward this goal using our task-specific AI agents, called "Exabots," and advanced data exploration. We think of this platform as an Agentic SOC Platform. Our goal with Exabots from conception has been to automate difficult tasks, not the simple or low-skill tasks you see in demos.

For the last 18 months, we have been working with our design partners to train Exabots to help SOC analysts, detection engineers, and threat hunters. Exabots augment them to auto-triage alerts, detect breaches in critical cloud services, and simplify the process of threat hunting. We are seeing up to 60X speed-up in day-to-day tasks alongside dramatic improvement in efficacy and auditability. 

Our light-bulb moment: Multi-Model AI Engine

Our ex-Google team knew from day one that no foundation model would deliver the consistency of reasoning needed for human-grade analysis of threat alerts, or analyze all runtime events at the cost points needed to detect breaches. As a result, we innovated on a brand-new approach, building a multi-model AI engine that combines three different types of AI models we have been developing for the last 18 months:

  • Semantic Model: builds human-grade understanding of runtime events/logs, cloud configuration, code, identity data, and threat feeds.
  • Behavioral Model: learns patterns of actions, applications, data, resources, identities (human and machine), locations, etc.
  • Knowledge Model: an LLM that performs reasoning on this data, executes dynamically generated workflows, and analyzes historical tickets in ITSM systems (e.g., Jira, ServiceNow).

Together, these models work in harmony to overcome the inherent flaws of an LLM-only approach (limited long- and short-term memory, inconsistent reasoning, the cost of reasoning over very large data sets). This AI engine can analyze all the data at cost points that are unmatched in the industry and still deliver human-grade analysis!

Backed by leading investors: $75 Million in Series A

Today, we’re thrilled to announce $75 million in Series A funding, led by Khosla Ventures and Mayfield, alongside Thomvest, Touring Capital, and others who share our belief in augmenting today’s hard working cyber professionals with AI that works consistently and reliably! 

This investment allows us to scale our investment in R&D to refine our multi-model AI engine, train Exabots to perform more and more complex tasks, and onboard more design partners eager to see how an agentic SOC can transform their security operations. 

A glimpse into the future of SOC

With Exaforce, our design partners are already seeing a multitude of benefits for their SOC teams:

  • Higher Efficacy: much higher consistency and quality in investigation of complex threats than their existing in-house SOC or external partners (MSSP and MDR)
  • Better Productivity: much faster detection of and response to complex threats to their cloud services compared to existing SIEM/CDR solutions
  • Cheaper to Scale: automated handling of challenging and tedious tasks (data collection, analysis, user and manager confirmations, ticket analysis, etc.) along with the ability to scale defense on-demand without adding headcount or new contracts with MDR/MSSP

See what the Wall Street Journal has to say about our funding! 

What’s next

Though we’re very excited about launching the company, our journey is just beginning! We’ll continue collaborating with more design partners to expand coverage, refine AI workflows, and ensure that humans always remain in control. Our goal is to build a SOC where AI handles the busywork and humans focus on true threats — creating a security environment that is truly more consistent in results, faster in response, and lower in TCO.

If you want a SOC that is composed of superhuman analysts, detection engineers, or threat hunters - request a demo to learn more. Together, we can build the future of the SOC!

Industry

Safeguarding against GitHub Actions (tj-actions/changed-files) compromise

March 16, 2025

4 minute read

How users can detect, prevent, and recover from supply chain threats with Exaforce

Since March 14th, 2025, Exaforce has been busy helping our design partners overcome a critical supply chain attack delivered through GitHub. This is the second major attack on our design partners' cloud deployments in the last six months, and we are grateful to have delivered value to them.

What Happened?

On March 14, 2025, security researchers detected unusual activity in the widely used GitHub Action tj-actions/changed-files. This action, primarily designed to list changed files in repositories, suffered a sophisticated supply chain compromise. Attackers injected malicious code into nearly all tagged versions through a malicious commit (0e58ed8671d6b60d0890c21b07f8835ace038e67).

The malicious payload was a base64-encoded script designed to print sensitive CI/CD secrets — including API keys, tokens, and credentials — directly into publicly accessible GitHub Actions build logs. Public repositories became especially vulnerable, potentially allowing anyone to harvest these exposed secrets.

Attackers retroactively updated version tags to point to the compromised commit, meaning even pinned tagged versions (if not pinned by specific commit SHAs) were vulnerable. While the script didn't exfiltrate secrets to external servers, it exposed them publicly, leading to the critical vulnerability CVE-2025-30066.

How We Helped Our Design Partners

Leveraging the Exaforce Platform, we swiftly identified all customer repositories and workflows using the compromised action. Our analysis included:

  • Quickly querying repositories and workflows across customer accounts.
  • Identifying affected secrets used by compromised workflows.
  • Directly communicating these findings and recommended remediation actions to affected customers.

Our security team proactively informed customers, detailing the specific impacted workflows and guiding them to rotate compromised secrets immediately.

What Should You Do?

Use the search URL below to look for impacted repositories, replacing the string <Your Org Name> with your GitHub org name.

https://github.com/search?q=org%3A<Your Org Name>+tj-actions%2Fchanged-files+&type=issues

If your workflows include tj-actions/changed-files, take immediate action.

  • Stop Using the Action Immediately: Remove all instances from your workflows across all branches.
  • Review Logs: Inspect GitHub Actions logs from March 14–15, 2025, for exposed secrets. Assume all logged secrets are compromised, especially in public repositories.
  • Rotate Secrets: Immediately rotate all potentially leaked credentials — API keys, tokens, passwords.
  • Switch to Alternatives: Use secure alternatives or inline file-change detection logic until a verified safe version becomes available.

Lessons Learned

This breach highlights critical vulnerabilities inherent in software supply chains. Dependence on third-party actions requires stringent security practices:

  • Pin your third-party GitHub Actions to commit SHAs instead of version tags.
  • Wherever possible, use native Git commands within your workflow rather than relying on a third-party action; this avoids external dependencies and reduces supply chain risk.
  • Restrict permissions via minimally scoped tokens (like GITHUB_TOKEN).
  • Implement continuous runtime monitoring, including enabling audit logs and action logs and capturing detailed resource information, to promptly detect anomalous behavior and facilitate comprehensive investigations.
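
The first two recommendations can be sketched in a workflow like this; the action name and SHA are placeholders, and fetch-depth: 2 is needed so that HEAD^ exists in the checkout:

```yaml
steps:
  # Pin third-party actions to a full commit SHA, never a mutable tag
  - uses: some-org/some-action@0123456789abcdef0123456789abcdef01234567

  # Or avoid the dependency entirely with native Git
  - uses: actions/checkout@v4
    with:
      fetch-depth: 2
  - name: List changed files
    run: git diff --name-only HEAD^ HEAD
```

Because a SHA cannot be retargeted the way a tag can, pinning by SHA would have protected workflows from the retroactive tag updates described above.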

By adopting these best practices, organizations can significantly reduce the risk posed by compromised third-party software components.

Reach out to us contact@exaforce.com if you’d like to understand how we protect GitHub and other data sources from supply chain compromises and other threats.

Industry

Npm provenance: bridging the missing security layer in JavaScript libraries

November 6, 2024

10 minute read

Why verifying package origins is crucial for secure JavaScript applications

The recent security incident involving the popular lottie-player library once again highlighted the fragility of the NPM ecosystem's security. While NPM provides robust security features like provenance attestation, many of the most downloaded packages aren't utilizing these critical security measures.

What is NPM Provenance?

NPM provenance, introduced last year, is a security feature that creates a verifiable connection between a published package and its source code repository. When enabled, it provides cryptographic proof that a package was built from a specific GitHub repository commit using GitHub Actions or GitLab runners. This helps prevent supply chain attacks where malicious actors could publish compromised versions of popular packages. However, it's important to note that this security relies on the integrity of your build environment itself: if your GitHub/GitLab account or CI/CD pipeline is compromised, the provenance attestation could still be generated for malicious code. Therefore, securing your source control and CI/CD infrastructure with strong access controls, audit logging, and regular security reviews remains critical.

The Current State of Popular NPM Packages

Let’s examine some of the most downloaded NPM packages and their provenance status:

Among the 2,000 most downloaded packages on jsDelivr, 205 packages have a public GitHub repository and publish directly to npm using GitHub Workflows. However, only 26 (12.6%) of these packages have enabled provenance, a security feature that verifies where and how a package was built. Making this incremental change to their GitHub workflows would be a significant security improvement for the entire community.

Critical Gaps in NPM’s Security Model

Server-Side Limitations

The NPM registry currently lacks critical server-side enforcement mechanisms:

1. No Mandatory Provenance

  • Packages can be published without any attestation
  • No way to enforce provenance requirements for specific packages or organizations
  • Registry accepts packages with or without verification

2. Missing Policy Controls

  • Organizations cannot set requirements for package publishing
  • No ability to enforce provenance for specific package names or patterns similar to git branch protection
  • No automated verification of build source authenticity

3. Version Control

  • No mechanism to prevent version updates without matching provenance
  • Cannot enforce stricter requirements for major version updates

Client-Side Verification Gaps

npm/yarn client tools also lack essential security controls:

1. Installation Process

2. Missing Security Features

  • No built-in flags to require provenance
  • Cannot enforce organization-wide attestation policies
  • No way to verify single package attestation

3. Package.json Limitations

The Lottie-Player Incident

The recent compromise of the lottie-player library serves as a stark reminder of what can go wrong. The attack timeline:

  1. Attackers gained access to the maintainer’s NPM account
  2. Published a malicious version of the package
  3. Users automatically received the compromised version through unpinned dependency updates and direct CDN links
  4. Malicious code executed on affected systems

Had the provenance attestation been enforced at either the registry or client level, this attack could have been prevented.

Why Aren’t More Packages Using Provenance?

Several factors contribute to the low adoption of NPM provenance:

  1. Awareness Gap: Many maintainers aren’t familiar with the feature
  2. Implementation Overhead: Requires GitHub Actions workflow modifications
  3. Legacy Systems: Existing build pipelines may need significant updates
  4. False Sense of Security: Reliance on other security measures like 2FA
  5. Lack of Enforcement: No pressure to implement due to missing registry requirements

To enable provenance for your NPM packages:

https://gist.github.com/pupapaik/9cc17e02a0b204281a5c14d8bc56aabb#file-npm-publish-workfow-yaml.js

Or configure it in package.json:

https://gist.github.com/pupapaik/fc640fbadf4581ad92b2143c7391e791#file-package-provenance-json.js
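
If the gists above are unavailable, a minimal publish job looks roughly like the sketch below; the essential pieces are the id-token: write permission and the --provenance flag (the Node version and secret name are assumptions):

```yaml
permissions:
  id-token: write   # mints the OIDC token that backs the attestation
  contents: read

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          registry-url: "https://registry.npmjs.org"
      - run: npm publish --provenance --access public
        env:
          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
```

Equivalently, setting "publishConfig": { "provenance": true } in package.json enables provenance without passing the flag on every publish.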

Package Provenance check

The npm audit signatures command can check the integrity and attestations of packages, but it doesn't allow you to verify individual packages, only all packages in a project at once.

NPM with invalid attestations

Since the npm CLI doesn’t provide an easy way to do this, I wrote a simple script to check the integrity and attestation of individual packages. This script makes it straightforward to validate each package.

This script can be used in a GitHub Workflow on the client side or as a monitoring tool to continuously check the attestation of upstream packages.

Client-Side Script Integrity Verification

While NPM provenance helps secure your package ecosystem, web applications loading JavaScript directly via CDN links need additional security measures. The Subresource Integrity (SRI) mechanism provides cryptographic verification for externally loaded resources. The Lottie-player attack was particularly devastating due to three common but dangerous practices:

1. Using latest tag

2. Missing integrity check

3. No Fallback Strategy

SRI works by providing a cryptographic hash of the expected file content. The browser:

  1. Downloads the resource
  2. Calculates its hash
  3. Compares it with the provided integrity value
  4. Blocks execution if there’s a mismatch
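
Applied to a CDN-loaded library, that looks like the sketch below: pin an exact version and supply its hash. The integrity value here is a placeholder; a real one can be computed with, for example, openssl dgst -sha384 -binary file.js | openssl base64 -A.

```html
<!-- Pinned version plus SRI hash; the hash below is a placeholder -->
<script
  src="https://unpkg.com/@lottiefiles/lottie-player@2.0.8/dist/lottie-player.js"
  integrity="sha384-REPLACE_WITH_REAL_BASE64_HASH"
  crossorigin="anonymous"></script>
```

With a pinned version and an integrity attribute, a silently replaced file on the CDN fails the hash comparison and never executes.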

When integrity verification fails, the browser blocks the JavaScript from executing and logs an error.

Recommendations for the Ecosystem

1. Package Maintainers:

  • Enable provenance attestation immediately
  • Document provenance status in README files
  • Use GitHub Actions for automated, verified builds

2. Package Users:

  • Check provenance status before adding new dependencies
  • Prefer packages with enabled provenance. Check websites such as TrustyPkg to understand a package's trustworthiness based on activity, provenance, and more
  • Monitor existing dependencies for provenance adoption

3. Platform Providers:

  • Make provenance status more visible in NPM registry UI
  • Provide tools for bulk provenance verification
  • Consider making provenance mandatory for high-impact packages
  • Implement server-side enforcement mechanisms
  • Add client-side verification tools

4. NPM Registry

  • Add organization-level provenance requirements
  • Implement mandatory attestation for popular packages
  • Provide API endpoints for provenance verification
  • Provide package approval process / workflow

Conclusion

The security of the NPM ecosystem affects millions of applications worldwide. The current lack of enforcement mechanisms at both the registry and client levels creates significant security risks. While provenance attestation is available, the inability to enforce it systematically leaves the ecosystem vulnerable to supply chain attacks.

The NPM team should prioritize implementing both server-side and client-side enforcement mechanisms. Until then, the community must rely on manual verification and best practices. Package maintainers should enable provenance attestation immediately, while users should demand better security controls and verification tools.

Only by working together to improve NPM’s infrastructure can we create a more secure JavaScript ecosystem. At ExaForce, we’re committed to taking the first step by helping open-source libraries adopt provenance attestation in their publishing process.

References

[1] Resolution of Security Incident with @lottiefiles/lottie-player Package

[2] Supply Chain Security Incident: Analysis of the LottieFiles NPM Package Compromise

[3] TrustyPkg Lottie verification database for developers to consume secure open source libraries

Industry

Exaforce’s response to the LottieFiles npm package compromise

November 1, 2024

5 minute read

Analyzing the supply chain attack and steps taken to secure the ecosystem

On October 30th, 2024, Exaforce's Incident Response team was engaged by LottieFiles following the discovery of a sophisticated supply chain attack targeting their popular lottie-player NPM package.

  • The incident involved the compromise of a package maintainer's credentials through a phishing attack, resulting in the distribution of malicious code designed to target cryptocurrency wallets used in the DeFi and Web3 community.
  • LottieFiles moved rapidly, and we were jointly able to contain the attack within an hour, minimizing potential impact on the package's extensive user base, estimated at over 11 million daily active users.
  • Throughout the process, LottieFiles demonstrated commendable speed and commitment to its community of users.

Exaforce is committed to ensuring LottieFiles is able to serve its community with the trust it has gained over the years. Key actions taken:

  • Helping the team at LottieFiles implement NPM package provenance attestation, providing cryptographic verification of package origins and build processes, along with continuous detection and response.
  • Remaining actively engaged with LottieFiles to strengthen their security posture and continue monitoring of critical systems.
  • Publishing a follow-up post-incident blog sharing additional learnings and suggested best practices.

Official details of the incident report here:

About LottieFiles and NPM Packages

LottieFiles has revolutionized web animation by providing developers with tools to implement lightweight, scalable animations across platforms. At the heart of their ecosystem lies the lottie-player NPM package, which serves over 9 million lifetime users and averages 94,000 weekly downloads. NPM packages form the backbone of modern JavaScript development, acting as building blocks that developers use to construct applications efficiently and securely. In the software supply chain, these packages represent both incredible value and potential vulnerability points, making their security paramount.

Attack Overview and Impact

The incident began with a sophisticated phishing campaign targeting LottieFiles developers. The attacker (email notify.npmjs@pm.me) sent a carefully crafted phishing email to a developer’s private Gmail account that was registered with NPM with an invitation to collaborate on the @lottiefiles/jlottie npm package. Through this social engineering attack, the threat actor successfully harvested both NPM credentials and two-factor authentication codes from the targeted developer.

Using compromised credentials, the attacker executed their campaign on October 30th, 2024, between 19:00 UTC and 20:00 UTC, publishing three malicious versions of the lottie-player package (2.0.5, 2.0.6, and 2.0.7) directly to the NPM registry. This manual publication bypassed LottieFiles’ standard GitHub Actions deployment pipeline.

The attack’s distribution mechanism proved particularly effective due to the nature of modern web development practices. The compromised versions rapidly propagated through major Content Delivery Networks (CDNs), affecting websites configured to automatically pull the latest library version. This auto-update feature, typically a security benefit, became an attack vector that significantly amplified the incident’s reach.

Important Lessons Learned

In the process of handling this incident, we've concluded that the current NPM package distribution model presents significant security challenges that should concern enterprise organizations relying on it for their JavaScript dependencies. While GitHub (after its acquisition of NPM and subsequent deprecation of NPM Enterprise) is promoting a migration strategy, there are critical security gaps in the existing npmjs.com offerings: lack of SSO for users, no logs for upstreaming or usage of packages, limited integrity checks, lack of OIDC support for automated systems, and no controls on distribution through CDNs. These limitations collectively represent a substantial security deficit in what has become the backbone of modern JavaScript development, potentially exposing organizations to supply chain attacks and compliance issues. We, along with LottieFiles, will work with npmjs and GitHub to improve the current gaps in this vital part of the software supply chain.

Incident Detection and Response Timeline

The incident was first reported through LottieFiles’ community website at approximately 19:24 UTC on October 30th, when users began noticing suspicious wallet connection prompts. Exaforce’s incident response team, working in conjunction with LottieFiles, implemented immediate countermeasures:

  • October 30th, 19:24 UTC: Initial detection and report
  • October 30th, 19:30 UTC: Impacted package versions (2.0.5, 2.0.6, 2.0.7) deleted
  • October 30th, 19:35 UTC: Revocation of compromised NPM access tokens
  • October 30th, 19:58 UTC: Publication of clean version 2.0.8
  • October 31st, 02:35 UTC: Removal of affected developer’s NPM access
  • October 31st, 02:40 UTC: Access of individual developers to NPM repositories revoked
  • October 31st, 02:45 UTC: All NPM keys, as well as keys for other systems, revoked and NPM automations suspended
  • October 31st, 03:30 UTC: Laptop in question quarantined for further post-incident analysis
  • October 31st, 03:35 UTC: Begin forensics on the compromised laptop
  • October 31st, 03:55 UTC: Coordination with major CDN providers to purge compromised files
  • October 31st, 04:00 UTC: First official X (Twitter) post by LottieFiles
  • October 31st, 20:06 UTC: All infected files removed from downstream CDNs (cdnjs.com, unpkg.com) with the help of the community operators
  • November 1st, 01:59 UTC: Second official update on X (Twitter) post by LottieFiles

Hardening Effort Towards a More Secure LottieFiles

In response to this incident, we are working with LottieFiles to implement comprehensive security improvements across their infrastructure. Key measures include:

  1. Implementation of NPM package provenance attestation and continuous monitoring of this, providing cryptographic verification of package origins and build processes. This ensures that packages are built and published through verified GitHub workflows only, eliminating the risk of direct human publishing.
  2. Understanding the posture of human and machine identities in critical systems. Machine identities, including credentials, are the most common threat vector in the cloud today. Gaining visibility into these identities, how they are being used and by whom is critical to establishing a strong cloud security posture.
  3. Real-time monitoring and threat detection coverage across all critical systems leveraging a combination of Exaforce AI-BOTs and our Managed Cloud Detection & Response service.

Stay tuned for a follow-up where we will share our learnings from helping LottieFiles establish industry-leading Security Engineering and Operations by augmenting their existing teams with task-specific AI-BOTs.

