Bentley Cloud Services Status

Welcome to the Bentley Systems health dashboard. Here you can follow the operational status of our products, as well as other key features and services.

Any issues or interruptions we encounter will be listed on this page.

CrowdStrike Global Outage Impacting Bentley Systems Network
Incident Report for Bentley Systems, Inc
Postmortem

Bentley Systems RCA Report

Incident Start Date: 07-19-2024
Incident Start Time: 04:09 AM UTC
Incident End Date: 07-20-2024
Incident End Time: 06:00 PM UTC
Duration of Incident: 2,271 minutes (37 hours, 51 minutes)
Incident ID: INC0204562
Service Impacted: Bentley Systems Managed Services
Customers Impacted: All Global Managed Services Accounts

Impact

The services impacted were the Managed Services global infrastructure, consisting of cloud-hosted Windows virtual servers.

All accounts using managed services were impacted.

Root Cause

A faulty content update for the CrowdStrike Falcon sensor software negatively impacted Microsoft Windows cloud servers, causing machines to experience Blue Screen of Death (BSOD) errors and continuous restart loops.

Proactive Measures

On 07-20-2024 at 6:00 PM UTC, we confirmed that full functionality had been restored for all affected Microsoft Windows cloud virtual machines following the CrowdStrike remediation release.

Following an extended period of monitoring, Bentley has determined this incident has been resolved.

Next Steps

The event and the conditions that led to the disruption will be investigated further, and the findings reviewed to mitigate external factors that might affect our services in the future.

Bentley Systems Contact Information

To obtain local contact information, please click the link for: Bentley Offices

You can follow the status of major events on www.status.bentley.com

We apologize for any inconvenience this may have caused. We recognize that any disruption of service is undesirable, so we will continue to research and evaluate potential changes to ensure a consistently high quality of service.

Posted Jul 25, 2024 - 00:07 UTC

Resolved
Our dedicated teams are actively working to restore all Bentley systems and services to full functionality following the effects of the global update issued by the third-party application CrowdStrike. We are committed to providing you with continuous updates. We value your patience and cooperation while we work to resolve this matter.

While our teams continue to work vigorously to restore all Bentley systems and services, we have the following update from Microsoft:
Awareness - Virtual Machines
We are aware of an issue that started on 19 July 2024 at 04:09 UTC, which resulted in customers experiencing unresponsiveness and startup failures on Windows machines using the CrowdStrike Falcon agent, affecting both on-premises and various cloud platforms (Azure, AWS, and Google Cloud).
It’s important to clarify that this incident is separate from the resolved Central US Azure outage (Tracking Id: 1K80-N_8). Microsoft is actively providing support to assist customers in their recovery on our platforms, offering additional guidance and technical assistance.
CrowdStrike has released a public statement on Windows Sensor Update - crowdstrike.com addressing the matter, and it includes recommended steps for a workaround.
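
For reference, the general manual workaround described in CrowdStrike's public statement follows the pattern sketched below. This is only an illustrative outline; confirm the exact path and file name against CrowdStrike's published guidance before applying it.

// Boot the affected Windows host into Safe Mode or the Windows Recovery Environment, then from a command prompt:

"cd %WINDIR%\System32\drivers\CrowdStrike"

"del C-00000291*.sys"

// Reboot the host normally. The path and file pattern above match the channel file referenced later on this page.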
Posted Jul 20, 2024 - 10:39 UTC
Update
While our teams continue to work vigorously to restore all Bentley systems and services, we have the following update from Microsoft:

Awareness - Virtual Machines

We are aware of an issue that started on 19 July 2024 at 04:09 UTC, which resulted in customers experiencing unresponsiveness and startup failures on Windows machines using the CrowdStrike Falcon agent, affecting both on-premises and various cloud platforms (Azure, AWS, and Google Cloud).

It’s important to clarify that this incident is separate from the resolved Central US Azure outage (Tracking Id: 1K80-N_8). Microsoft is actively providing support to assist customers in their recovery on our platforms, offering additional guidance and technical assistance.

CrowdStrike has released a public statement on Windows Sensor Update - crowdstrike.com addressing the matter, and it includes recommended steps for a workaround. For environments specific to Azure, further instructions are provided below:

Updated: We estimate that impact started as early as 19 July 2024 at 04:09 UTC, when this update started rolling out.

Update as of 02:30 UTC on 20 July 2024:

We have received reports of successful recovery from some customers attempting multiple Virtual Machine restart operations on affected Virtual Machines. Customers can attempt to do so as follows:

Using the Azure Portal - attempting 'Restart' on affected VMs
Using the Azure CLI or Azure Shell (https://shell.azure.com)
https://learn.microsoft.com/en-us/cli/azure/vm?view=azure-cli-latest#az-vm-restart

We have received feedback from customers that several reboots may be required, but overall feedback is that reboots are an effective troubleshooting step at this stage.
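
As a minimal sketch of the CLI restart path (RGNAME and VMNAME are placeholders for your own resource group and VM name):

// Restart an affected VM from the Azure CLI or Cloud Shell; several attempts may be needed

"az vm restart -g RGNAME -n VMNAME"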

Additional options for recovery:

Option 1

We recommend that customers who are able to do so restore from a backup, preferably one taken before 19 July 2024 at 04:09 UTC, when this faulty update started rolling out.

Customers leveraging Azure Backup can follow these instructions:
How to restore Azure VM data in the Azure portal
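
For customers who prefer the CLI, one possible restore path is sketched below. It assumes the VM is protected in a Recovery Services vault; VAULTNAME, CONTAINERNAME and the other names are placeholders, and the parameters should be checked against the az backup reference for your configuration.

// List recovery points for the protected VM, then restore its disks to a staging storage account

"az backup recoverypoint list --resource-group RGNAME --vault-name VAULTNAME --container-name CONTAINERNAME --item-name VMNAME --backup-management-type AzureIaasVM --output table"

"az backup restore restore-disks --resource-group RGNAME --vault-name VAULTNAME --container-name CONTAINERNAME --item-name VMNAME --rp-name RECOVERYPOINTNAME --storage-account STAGINGSTORAGEACCOUNT"

// The restored disks can then be used to recreate or repair the VM, as described in the linked article.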

Option 2

Customers can attempt to remove the C-00000291*.sys file on the disk directly, potentially avoiding the need to detach and reattach the disk.

Open the Azure CLI and run the following steps:

1. Create a rescue VM with:

// Creates a rescue VM, same size as the original VM in the same region. Asks for Username and password.

// Makes a copy of the OS Disk of the problem VM

// Attaches the OS Disk as Data disk to the Rescue VM

//az vm repair create -g {your-resource-group} -n {vm-name} --verbose

"az vm repair create -g RGNAME -n VMNAME -- verbose"

**NOTE: For encrypted VM run the following command:**

"az vm repair create -g RGNAME -n BROKENVMNAME --unlock-encrypted-vm --verbose"



2. Then run:

// Runs the mitigation script on the Rescue VM which fixes the problem (on the OS-disk copy attached as a data disk)

//az vm repair run -g {your-resource-group} -n {vm-name} --run-id win-crowdstrike-fix-bootloop --verbose

"az vm repair run -g RGNAME -n BROKENVMNAME --run-id win-crowdstrike-fix-bootloop --run-on-repair --verbose"



3. The final step is to run:

// Removes the Fixed OS-Disk Copy from the rescue VM

// Stops the problem VM but it is not deallocated

// Attaches the fixed OS-Disk to the original VM

// Starts the original VM

// Gives prompts to delete the repair vm

//az vm repair restore -g {your-resource-group} -n {vmname} --verbose

"az vm repair restore -g RGNAME -n BROKENVMNAME" --verbose

Note: These steps work for both managed and unmanaged disks. If you run into capacity issues, please retry after some time.

Option 3

Customers can attempt repairs on the OS disk by following these instructions:

Troubleshoot a Windows VM by attaching the OS disk to a repair VM through the Azure portal

Once the disk is attached, customers can attempt to delete the following file:

Windows/System32/Drivers/CrowdStrike/C-00000291*.sys
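
As an illustrative sketch, if the attached OS disk appears on the repair VM as drive F: (an assumption; the actual drive letter will vary), the file can be removed from a command prompt on the repair VM:

"del F:\Windows\System32\drivers\CrowdStrike\C-00000291*.sys"

// F: is assumed; substitute the drive letter assigned to the attached data disk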

The disk can then be detached from the repair VM and re-attached to the original VM.

We can confirm the affected update has been pulled by CrowdStrike. Customers that are continuing to experience issues should reach out to CrowdStrike for additional assistance.

Additionally, we're continuing to investigate additional mitigation options for customers and will share more information as it becomes known.

This message was last updated at 02:34 UTC on 20 July 2024
Posted Jul 20, 2024 - 06:46 UTC
Update
While our teams continue to work vigorously to restore all Bentley systems and services, we have the following update from Microsoft:

Awareness - Virtual Machines

We are aware of an issue that started on 19 July 2024 at 04:09 UTC, which resulted in customers experiencing unresponsiveness and startup failures on Windows machines using the CrowdStrike Falcon agent, affecting both on-premises and various cloud platforms (Azure, AWS, and Google Cloud).

It’s important to clarify that this incident is separate from the resolved Central US Azure outage (Tracking Id: 1K80-N_8). Microsoft is actively providing support to assist customers in their recovery on our platforms, offering additional guidance and technical assistance.

CrowdStrike has released a public statement on Windows Sensor Update - crowdstrike.com addressing the matter, and it includes recommended steps for a workaround. For environments specific to Azure, further instructions are provided below:

Updated: We estimate that impact started as early as 19 July 2024 at 04:09 UTC, when this update started rolling out.

Update as of 02:30 UTC on 20 July 2024:

We have received reports of successful recovery from some customers attempting multiple Virtual Machine restart operations on affected Virtual Machines. Customers can attempt to do so as follows:

Using the Azure Portal - attempting 'Restart' on affected VMs
Using the Azure CLI or Azure Shell (https://shell.azure.com)
https://learn.microsoft.com/en-us/cli/azure/vm?view=azure-cli-latest#az-vm-restart

We have received feedback from customers that several reboots may be required, but overall feedback is that reboots are an effective troubleshooting step at this stage.

Additional options for recovery:

Option 1

We recommend that customers who are able to do so restore from a backup, preferably one taken before 19 July 2024 at 04:09 UTC, when this faulty update started rolling out.

Customers leveraging Azure Backup can follow these instructions:
How to restore Azure VM data in the Azure portal

Option 2

Customers can attempt to remove the C-00000291*.sys file on the disk directly, potentially avoiding the need to detach and reattach the disk.

Open the Azure CLI and run the following steps:

1. Create a rescue VM with:

// Creates a rescue VM, same size as the original VM in the same region. Asks for Username and password.

// Makes a copy of the OS Disk of the problem VM

// Attaches the OS Disk as Data disk to the Rescue VM

//az vm repair create -g {your-resource-group} -n {vm-name} --verbose

"az vm repair create -g RGNAME -n VMNAME -- verbose"

**NOTE: For encrypted VM run the following command:**

"az vm repair create -g RGNAME -n BROKENVMNAME --unlock-encrypted-vm --verbose"



2. Then run:

// Runs the mitigation script on the Rescue VM which fixes the problem (on the OS-disk copy attached as a data disk)

//az vm repair run -g {your-resource-group} -n {vm-name} --run-id win-crowdstrike-fix-bootloop --verbose

"az vm repair run -g RGNAME -n BROKENVMNAME --run-id win-crowdstrike-fix-bootloop --run-on-repair --verbose"



3. The final step is to run:

// Removes the Fixed OS-Disk Copy from the rescue VM

// Stops the problem VM but it is not deallocated

// Attaches the fixed OS-Disk to the original VM

// Starts the original VM

// Gives prompts to delete the repair vm

//az vm repair restore -g {your-resource-group} -n {vmname} --verbose

"az vm repair restore -g RGNAME -n BROKENVMNAME" --verbose

Note: These steps work for both managed and unmanaged disks. If you run into capacity issues, please retry after some time.

Option 3

Customers can attempt repairs on the OS disk by following these instructions:

Troubleshoot a Windows VM by attaching the OS disk to a repair VM through the Azure portal

Once the disk is attached, customers can attempt to delete the following file:

Windows/System32/Drivers/CrowdStrike/C-00000291*.sys

The disk can then be detached from the repair VM and re-attached to the original VM.

We can confirm the affected update has been pulled by CrowdStrike. Customers that are continuing to experience issues should reach out to CrowdStrike for additional assistance.

Additionally, we're continuing to investigate additional mitigation options for customers and will share more information as it becomes known.

This message was last updated at 02:34 UTC on 20 July 2024
Posted Jul 20, 2024 - 02:51 UTC
Update
While our teams continue to work vigorously to restore all Bentley systems and services, we have the following update from Microsoft:

Awareness - Virtual Machines

We are aware of an issue that started on 19 July 2024 at 04:09 UTC, which resulted in customers experiencing unresponsiveness and startup failures on Windows machines using the CrowdStrike Falcon agent, affecting both on-premises and various cloud platforms (Azure, AWS, and Google Cloud).

It’s important to clarify that this incident is separate from the resolved Central US Azure outage (Tracking Id: 1K80-N_8). Microsoft is actively providing support to assist customers in their recovery on our platforms, offering additional guidance and technical assistance.

CrowdStrike has released a public statement on Windows Sensor Update - crowdstrike.com addressing the matter, and it includes recommended steps for a workaround. For environments specific to Azure, further instructions are provided below:

Updated: We estimate that impact started as early as 19 July 2024 at 04:09 UTC, when this update started rolling out.

Update as of 10:30 UTC on 19 July 2024:

We have received reports of successful recovery from some customers attempting multiple Virtual Machine restart operations on affected Virtual Machines. Customers can attempt to do so as follows:

Using the Azure Portal - attempting 'Restart' on affected VMs
Using the Azure CLI or Azure Shell (https://shell.azure.com)
https://learn.microsoft.com/en-us/cli/azure/vm?view=azure-cli-latest#az-vm-restart

We have received feedback from customers that several reboots may be required, but overall feedback is that reboots are an effective troubleshooting step at this stage.

Additional options for recovery:

Option 1

We recommend that customers who are able to do so restore from a backup, preferably one taken before 19 July 2024 at 04:09 UTC, when this faulty update started rolling out.

Customers leveraging Azure Backup can follow these instructions:
How to restore Azure VM data in the Azure portal

Option 2

Customers can attempt to remove the C-00000291*.sys file on the disk directly, potentially avoiding the need to detach and reattach the disk.

Open the Azure CLI and run the following steps:

1. Create a rescue VM with:

// Creates a rescue VM, same size as the original VM in the same region. Asks for Username and password.

// Makes a copy of the OS Disk of the problem VM

// Attaches the OS Disk as Data disk to the Rescue VM

//az vm repair create -g {your-resource-group} -n {vm-name} --verbose

"az vm repair create -g RGNAME -n VMNAME -- verbose"

**NOTE: For encrypted VM run the following command:**

"az vm repair create -g RGNAME -n BROKENVMNAME --unlock-encrypted-vm --verbose"



2. Then run:

// Runs the mitigation script on the Rescue VM which fixes the problem (on the OS-disk copy attached as a data disk)

//az vm repair run -g {your-resource-group} -n {vm-name} --run-id win-crowdstrike-fix-bootloop --verbose

"az vm repair run -g RGNAME -n BROKENVMNAME --run-id win-crowdstrike-fix-bootloop --run-on-repair --verbose"



3. The final step is to run:

// Removes the Fixed OS-Disk Copy from the rescue VM

// Stops the problem VM but it is not deallocated

// Attaches the fixed OS-Disk to the original VM

// Starts the original VM

// Gives prompts to delete the repair vm

//az vm repair restore -g {your-resource-group} -n {vmname} --verbose

"az vm repair restore -g RGNAME -n BROKENVMNAME" --verbose

Note: These steps work for both managed and unmanaged disks. If you run into capacity issues, please retry after some time.

Option 3

Customers can attempt repairs on the OS disk by following these instructions:

Troubleshoot a Windows VM by attaching the OS disk to a repair VM through the Azure portal

Once the disk is attached, customers can attempt to delete the following file:

Windows/System32/Drivers/CrowdStrike/C-00000291*.sys

The disk can then be detached from the repair VM and re-attached to the original VM.

We can confirm the affected update has been pulled by CrowdStrike. Customers that are continuing to experience issues should reach out to CrowdStrike for additional assistance.

Additionally, we're continuing to investigate additional mitigation options for customers and will share more information as it becomes known.

This message was last updated at 22:24 UTC on 19 July 2024
Posted Jul 19, 2024 - 23:08 UTC
Update
While our teams continue to work vigorously to restore all Bentley systems and services, we have the following update from Microsoft:

Awareness - Virtual Machines

We are aware of an issue that started on 19 July 2024 at 04:09 UTC, which resulted in customers experiencing unresponsiveness and startup failures on Windows machines using the CrowdStrike Falcon agent, affecting both on-premises and various cloud platforms (Azure, AWS, and Google Cloud).

It’s important to clarify that this incident is separate from the resolved Central US Azure outage (Tracking Id: 1K80-N_8). Microsoft is actively providing support to assist customers in their recovery on our platforms, offering additional guidance and technical assistance.

CrowdStrike has released a public statement on Windows Sensor Update - crowdstrike.com addressing the matter, and it includes recommended steps for a workaround. For environments specific to Azure, further instructions are provided below:

Updated: We estimate that impact started as early as 19 July 2024 at 04:09 UTC, when this update started rolling out.

Update as of 10:30 UTC on 19 July 2024:

We have received reports of successful recovery from some customers attempting multiple Virtual Machine restart operations on affected Virtual Machines. Customers can attempt to do so as follows:

Using the Azure Portal - attempting 'Restart' on affected VMs
Using the Azure CLI or Azure Shell (https://shell.azure.com)
https://learn.microsoft.com/en-us/cli/azure/vm?view=azure-cli-latest#az-vm-restart

We have received feedback from customers that several reboots may be required, but overall feedback is that reboots are an effective troubleshooting step at this stage.

Additional options for recovery:

Option 1

We recommend that customers who are able to do so restore from a backup, preferably one taken before 19 July 2024 at 04:09 UTC, when this faulty update started rolling out.

Customers leveraging Azure Backup can follow these instructions:
How to restore Azure VM data in the Azure portal

Option 2

Customers can attempt to remove the C-00000291*.sys file on the disk directly, potentially avoiding the need to detach and reattach the disk.

Open the Azure CLI and run the following steps:

1. Create a rescue VM with:

// Creates a rescue VM, same size as the original VM in the same region. Asks for Username and password.

// Makes a copy of the OS Disk of the problem VM

// Attaches the OS Disk as Data disk to the Rescue VM

//az vm repair create -g {your-resource-group} -n {vm-name} --verbose

"az vm repair create -g RGNAME -n VMNAME" -- verbose



2. Then run:

// Runs the mitigation script on the Rescue VM which fixes the problem (on the OS-disk copy attached as a data disk)

//az vm repair run -g {your-resource-group} -n {vm-name} --run-id win-crowdstrike-fix-bootloop --verbose

"az vm repair run -g RGNAME -n BROKENVMNAME --run-id win-crowdstrike-fix-bootloop --run-on-repair --verbose"



3. The final step is to run:

// Removes the Fixed OS-Disk Copy from the rescue VM

// Stops the problem VM but it is not deallocated

// Attaches the fixed OS-Disk to the original VM

// Starts the original VM

// Gives prompts to delete the repair vm

//az vm repair restore -g {your-resource-group} -n {vmname} --verbose

"az vm repair restore -g RGNAME -n BROKENVMNAME" --verbose

Note: These steps work for both managed and unmanaged disks. If you run into capacity issues, please retry after some time.

Option 3

Customers can attempt repairs on the OS disk by following these instructions:

Troubleshoot a Windows VM by attaching the OS disk to a repair VM through the Azure portal

Once the disk is attached, customers can attempt to delete the following file:

Windows/System32/Drivers/CrowdStrike/C-00000291*.sys

The disk can then be detached from the repair VM and re-attached to the original VM.

We can confirm the affected update has been pulled by CrowdStrike. Customers that are continuing to experience issues should reach out to CrowdStrike for additional assistance.

Additionally, we're continuing to investigate additional mitigation options for customers and will share more information as it becomes known.

This message was last updated at 20:23 UTC on 19 July 2024
Posted Jul 19, 2024 - 21:16 UTC
Update
While our teams continue to work vigorously to restore all Bentley systems and services, we have the following update from Microsoft:

Awareness - Virtual Machines

We are aware of an issue that started on 19 July 2024 at 04:09 UTC, which resulted in customers experiencing unresponsiveness and startup failures on Windows machines using the CrowdStrike Falcon agent, affecting both on-premises and various cloud platforms (Azure, AWS, and Google Cloud).

It’s important to clarify that this incident is separate from the resolved Central US Azure outage (Tracking Id: 1K80-N_8). Microsoft is actively providing support to assist customers in their recovery on our platforms, offering additional guidance and technical assistance.

CrowdStrike has released a public statement on Windows Sensor Update - crowdstrike.com addressing the matter, and it includes recommended steps for a workaround. For environments specific to Azure, further instructions are provided below:

Updated: We estimate that impact started as early as 19 July 2024 at 04:09 UTC, when this update started rolling out.

Update as of 10:30 UTC on 19 July 2024:

We have received reports of successful recovery from some customers attempting multiple Virtual Machine restart operations on affected Virtual Machines. Customers can attempt to do so as follows:

Using the Azure Portal - attempting 'Restart' on affected VMs
Using the Azure CLI or Azure Shell (https://shell.azure.com)
https://learn.microsoft.com/en-us/cli/azure/vm?view=azure-cli-latest#az-vm-restart

We have received feedback from customers that several reboots (as many as 15 have been reported) may be required, but overall feedback is that reboots are an effective troubleshooting step at this stage.

Additional options for recovery:

We recommend that customers who are able to do so restore from a backup, preferably one taken before 19 July 2024 at 04:09 UTC, when this faulty update started rolling out.

Customers leveraging Azure Backup can follow these instructions:
How to restore Azure VM data in the Azure portal

Alternatively, customers can attempt repairs on the OS disk by following these instructions:
Troubleshoot a Windows VM by attaching the OS disk to a repair VM through the Azure portal

Once the disk is attached, customers can attempt to delete the following file:

Windows/System32/Drivers/CrowdStrike/C-00000291*.sys

The disk can then be detached from the repair VM and re-attached to the original VM.

We can confirm the affected update has been pulled by CrowdStrike. Customers that are continuing to experience issues should reach out to CrowdStrike for additional assistance.

Additionally, we're continuing to investigate additional mitigation options for customers and will share more information as it becomes known.

This message was last updated at 19:10 UTC on 19 July 2024
Posted Jul 19, 2024 - 20:13 UTC
Update
While our teams continue to work vigorously to restore all Bentley systems and services, we have the following update from Microsoft:

Awareness - Virtual Machines

We are aware of an issue that started on July 18, which resulted in customers experiencing unresponsiveness and startup failures on Windows machines using the CrowdStrike Falcon agent, affecting both on-premises and various cloud platforms (Azure, AWS, and Google Cloud).

It’s important to clarify that this incident is separate from the resolved Central US Azure outage (Tracking Id: 1K80-N_8). Microsoft is actively providing support to assist customers in their recovery on our platforms, offering additional guidance and technical assistance.

CrowdStrike has released a public statement on Windows Sensor Update - crowdstrike.com addressing the matter, and it includes recommended steps for a workaround. For environments specific to Azure, further instructions are provided below:

Updated: We estimate that impact started as early as 04:09 UTC on the 18th of July, when this update started rolling out.

Update as of 10:30 UTC on 19 July 2024:

We have received reports of successful recovery from some customers attempting multiple Virtual Machine restart operations on affected Virtual Machines. Customers can attempt to do so as follows:

Using the Azure Portal - attempting 'Restart' on affected VMs
Using the Azure CLI or Azure Shell (https://shell.azure.com)
https://learn.microsoft.com/en-us/cli/azure/vm?view=azure-cli-latest#az-vm-restart

We have received feedback from customers that several reboots (as many as 15 have been reported) may be required, but overall feedback is that reboots are an effective troubleshooting step at this stage.

Additional options for recovery:

We recommend that customers who are able to do so restore from a backup, preferably one taken before 04:09 UTC on the 18th of July, when this faulty update started rolling out.

Customers leveraging Azure Backup can follow these instructions:
How to restore Azure VM data in the Azure portal

Alternatively, customers can attempt repairs on the OS disk by following these instructions:
Troubleshoot a Windows VM by attaching the OS disk to a repair VM through the Azure portal

Once the disk is attached, customers can attempt to delete the following file:

Windows/System32/Drivers/CrowdStrike/C-00000291*.sys

The disk can then be detached from the repair VM and re-attached to the original VM.

We can confirm the affected update has been pulled by CrowdStrike. Customers that are continuing to experience issues should reach out to CrowdStrike for additional assistance.

Additionally, we're continuing to investigate additional mitigation options for customers and will share more information as it becomes known.

This message was last updated at 17:43 UTC on 19 July 2024
Posted Jul 19, 2024 - 18:11 UTC
Update
While our teams continue to work vigorously to restore all Bentley systems and services, CrowdStrike has released the following statement:

CrowdStrike is actively working with customers impacted by a defect found in a single content update for Windows hosts. Mac and Linux hosts are not impacted. This is not a security incident or cyberattack.

The issue has been identified, isolated and a fix has been deployed. We refer customers to the support portal for the latest updates and will continue to provide complete and continuous updates on our website.

We further recommend organizations ensure they’re communicating with CrowdStrike representatives through official channels.

Our team is fully mobilized to ensure the security and stability of CrowdStrike customers.
Posted Jul 19, 2024 - 11:25 UTC
Update
We are continuing to work on a fix for this issue.
Posted Jul 19, 2024 - 08:58 UTC
Identified
CrowdStrike Engineering has identified a content deployment related to this issue and reverted those changes.
Posted Jul 19, 2024 - 08:44 UTC
Update
We are continuing to investigate this issue.
Posted Jul 19, 2024 - 08:20 UTC
Update
We are continuing to investigate this issue.
Posted Jul 19, 2024 - 08:14 UTC
Update
Our team is currently investigating an issue with Bentley machines being down. Some users may have trouble accessing certain sites, and affected systems may be restarting automatically or unable to use certain features.
We are working diligently to identify the root cause of the problem and implement a solution. We will provide an update as we learn more.
In the meantime, we apologize for any inconvenience this may cause and appreciate your patience and understanding.
Posted Jul 19, 2024 - 08:02 UTC
Investigating
Our team is currently investigating an issue with Bentley machines being down. Some users may have trouble accessing certain sites, and affected systems may be restarting automatically or unable to use certain features.
We are working diligently to identify the root cause of the problem and implement a solution. We will provide an update as we learn more.
In the meantime, we apologize for any inconvenience this may cause and appreciate your patience and understanding.
Posted Jul 19, 2024 - 07:53 UTC
This incident affected: ALIM (Session Service), Base Services (Bentley.com, Managed Services (Global Status)), and ProjectWise Services (Deliverables Management, Portfolio Insights, Project Insights, ProjectWise 365, ProjectWise Design Integration, ProjectWise Drive, ProjectWise Web View).