Bentley Cloud Services Status

Welcome to the Bentley Systems health dashboard. Here you can follow the operational status of our products, as well as other key features and services.

Any issues or interruptions we encounter will be listed on this page.

In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jul 27, 2024 - 08:00 UTC
Scheduled - On July 27th, 2024, at 08:00 UTC, during a one-hour scheduled maintenance window, the ProjectWise Deliverables Management service will update its server URL to https://service.pwdm.bentley.com. During this time, you might experience some service interruptions.
Both the old URL (https://connect-bts.bentley.com) and the new one will be supported until October 31st. After that, only the new server URL will be valid.
If you opened your firewall for the Deliverables Management server IP or server URL, you need to update your configuration to allow the new service URLs through. The new URLs do not use static IP addresses, so allowing by server IP is no longer an option; a quick connectivity check is sketched below.
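As a rough verification sketch only (the exact firewall rule syntax depends on your firewall vendor, so the commands below simply confirm that the new hostname is resolvable and reachable from a machine behind your firewall):

// Confirms the new hostname resolves; the returned IP address can change over time, so do not pin it
nslookup service.pwdm.bentley.com

// PowerShell: confirms outbound HTTPS (port 443) connectivity to the new service URL
Test-NetConnection service.pwdm.bentley.com -Port 443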
You can view the changes here: https://bentleysystems.service-now.com/community?id=kb_article_view&sys_kb_id=0431c3361bcb06103c8c7510cd4bcb2e
If you have any questions, please reach out to Bentley support.

Jul 27, 2024 08:00-09:00 UTC
Current component status (uptime measured over the past 90 days):
Base Services: Operational, 99.48% uptime
Bentley Communities: Operational, 99.69% uptime
Bentley.com: Operational, 98.52% uptime
CONNECT Center: Operational, 100.0% uptime
Learn: Operational, 100.0% uptime
Managed Services (Global Status): Operational, 98.14% uptime
Phone System: Operational, 98.17% uptime
Product Documentation: Operational, 100.0% uptime
Software Downloads: Operational, 100.0% uptime
Support Portal: Operational, 99.74% uptime
URL Directory Service (BUDDI): Operational, 100.0% uptime
Virtuosity Digital Storefront: Operational, 100.0% uptime
Identity Management Systems (IMS): Operational, 99.99% uptime
Authentication / Login: Operational, 99.99% uptime
User Management: Operational, 99.98% uptime
iTwin Platform: Operational, 99.99% uptime
Access Control: Operational, 100.0% uptime
Carbon Calculation: Operational, 100.0% uptime
Changed Elements: Operational, 100.0% uptime
Clash Detection: Operational, 100.0% uptime
Export: Operational, 100.0% uptime
Forms: Operational, 100.0% uptime
iModels: Operational, 99.98% uptime
iModels OData: Operational, 100.0% uptime
Issues: Operational, 100.0% uptime
iTwins: Operational, 99.98% uptime
Library: Operational, 100.0% uptime
Mesh Export: Operational, 100.0% uptime
PnID to iTwin: Operational, 100.0% uptime
Projects: Operational, 99.99% uptime
Property Validation: Operational, 100.0% uptime
Reality Analysis: Operational, 100.0% uptime
Reality Conversion: Operational, 100.0% uptime
Reality Data: Operational, 100.0% uptime
Reality Management: Operational, 100.0% uptime
Reality Modeling: Operational, 100.0% uptime
Reporting: Operational, 100.0% uptime
Saved Views: Operational, 100.0% uptime
Sensor Data: Operational, 100.0% uptime
Storage: Operational, 100.0% uptime
Synchronization: Operational, 100.0% uptime
Transformations: Operational, 100.0% uptime
Users: Operational, 100.0% uptime
Visualization: Operational, 99.93% uptime
Webhooks: Operational, 100.0% uptime
iTwin Products: Operational, 99.8% uptime
iTwin Experience: Operational, 100.0% uptime
iTwin IoT: Operational, 98.45% uptime
OpenCities Planner: Operational, 100.0% uptime
OpenFlows WaterSight: Operational, 100.0% uptime
OpenTower iQ: Operational, 100.0% uptime
OpenUtilities (Digital Twin Services): Operational, 100.0% uptime
PlantSight: Operational, 100.0% uptime
Reality Data View: Operational, 100.0% uptime
iTwin Services: Operational, 100.0% uptime
Design Insights: Operational, 100.0% uptime
Design Validation: Operational, 100.0% uptime
GeoCoordination Service: Operational, 100.0% uptime
iModel Manager: Operational, 100.0% uptime
Network Topology Service: Operational, 100.0% uptime
Platform APIs: Operational, 100.0% uptime
Platform Developer Portal: Operational, 100.0% uptime
Reality Analysis Service: Operational, 100.0% uptime
Reality Conversion Service: Operational, 100.0% uptime
Reality Management Service: Operational, 100.0% uptime
Reality Modeling Service: Operational, 100.0% uptime
ProjectWise Services: Under Maintenance, 99.65% uptime
Components Center: Operational, 100.0% uptime
Deliverables Management: Under Maintenance, 99.51% uptime
Design Review: Operational, 100.0% uptime
Forms: Operational, 100.0% uptime
Issue Resolution: Operational, 100.0% uptime
Portfolio Insights: Operational, 99.51% uptime
Project Insights: Operational, 99.51% uptime
Project Share: Operational, 99.99% uptime
Project Synchronization: Operational, 100.0% uptime
ProjectWise 365: Operational, 99.51% uptime
ProjectWise Design Integration: Operational, 98.47% uptime
ProjectWise Drive: Operational, 99.51% uptime
ProjectWise Web View: Operational, 99.51% uptime
Subscription Entitlement Services: Operational, 99.99% uptime
Alerting Service: Operational, 100.0% uptime
CONNECTION Client: Operational, 100.0% uptime
Entitlement Management Portal: Operational, 99.98% uptime
Policy Service: Operational, 100.0% uptime
Roles and Permissions: Operational, 100.0% uptime
Subscription Analytics: Operational, 100.0% uptime
Usage Logging Service: Operational, 100.0% uptime
SYNCHRO: Operational, 100.0% uptime
SYNCHRO 4D: Operational, 100.0% uptime
SYNCHRO Control: Operational, 100.0% uptime
SYNCHRO Cost: Operational, 100.0% uptime
ALIM: Operational, 98.57% uptime
Session Service: Operational, 98.57% uptime
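For context on the uptime figures: a 90-day window is 2,160 hours, so each 0.01% of downtime is roughly 13 minutes. 99.99% uptime therefore corresponds to about 13 minutes of cumulative downtime, and 99.48% to roughly 11.2 hours (2,160 h × 0.52% ≈ 11.2 h).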
Past Incidents
Jul 27, 2024

Unresolved incident: ProjectWise Deliverables Management service will be switching its deployment model and server name on July 27th.

Jul 26, 2024

No incidents reported.

Jul 25, 2024

No incidents reported.

Jul 24, 2024

No incidents reported.

Jul 23, 2024

No incidents reported.

Jul 22, 2024

No incidents reported.

Jul 21, 2024
Postmortem - Read details
Jul 25, 00:08 UTC
Resolved - All systems have been successfully recovered following the CrowdStrike outage and are currently under active monitoring by our support teams.
Jul 21, 13:14 UTC
Update - All systems have been successfully recovered following the CrowdStrike outage and are currently under active monitoring by our support teams.
Jul 20, 19:22 UTC
Update - Our dedicated teams are actively working to restore all Bentley systems and services to full functionality following the effects of the faulty global update issued by the third-party application CrowdStrike. We are committed to providing you with continuous updates. We value your patience and cooperation while we work to resolve this matter.

The majority of services have now been restored and most should be operational. Bentley is still carrying out some mitigation work, which involves restarting systems, so users may experience intermittent issues.

Jul 20, 16:13 UTC
Monitoring - Our dedicated teams are actively working to restore all Bentley systems and services to full functionality following the effects of the faulty global update issued by the third-party application CrowdStrike. We are committed to providing you with continuous updates. We value your patience and cooperation while we work to resolve this matter.

The majority of services have now been restored and most should be operational. Bentley is still carrying out some mitigation work, which involves restarting systems, so users may experience intermittent issues.

Jul 20, 15:48 UTC
Identified - Our dedicated teams are actively working to restore all Bentley systems and services to full functionality following the effects of the faulty global update issued by the third-party application CrowdStrike. We are committed to providing you with continuous updates. We value your patience and cooperation while we work to resolve this matter.

While our teams continue to work vigorously to restore all Bentley systems and services, we have the following update from Microsoft:
Awareness - Virtual Machines
We are aware of an issue that started on 19 July 2024 at 04:09 UTC, which resulted in customers experiencing unresponsiveness and startup failures on Windows machines using the CrowdStrike Falcon agent, affecting both on-premises and various cloud platforms (Azure, AWS, and Google Cloud).
It’s important to clarify that this incident is separate from the resolved Central US Azure outage (Tracking Id: 1K80-N_8). Microsoft is actively providing support to assist customers in their recovery on our platforms, offering additional guidance and technical assistance.
CrowdStrike has released a public statement on Windows Sensor Update - crowdstrike.com addressing the matter, and it includes recommended steps for a workaround.

Jul 20, 10:41 UTC
Jul 20, 2024
Postmortem - Read details
Jul 25, 00:07 UTC
Resolved - Our dedicated teams are actively working to restore all Bentley systems and services to full functionality following the effects of the faulty global update issued by the third-party application CrowdStrike. We are committed to providing you with continuous updates. We value your patience and cooperation while we work to resolve this matter.

While our teams continue to work vigorously to restore all Bentley systems and services, we have the following update from Microsoft:
Awareness - Virtual Machines
We are aware of an issue that started on 19 July 2024 at 04:09 UTC, which resulted in customers experiencing unresponsiveness and startup failures on Windows machines using the CrowdStrike Falcon agent, affecting both on-premises and various cloud platforms (Azure, AWS, and Google Cloud).
It’s important to clarify that this incident is separate from the resolved Central US Azure outage (Tracking Id: 1K80-N_8). Microsoft is actively providing support to assist customers in their recovery on our platforms, offering additional guidance and technical assistance.
CrowdStrike has released a public statement on Windows Sensor Update - crowdstrike.com addressing the matter, and it includes recommended steps for a workaround.

Jul 20, 10:39 UTC
Update - While our teams continue to work vigorously to restore all Bentley systems and services, we have the following update from Microsoft:

Awareness - Virtual Machines

We are aware of an issue that started on 19 July 2024 at 04:09UTC, which resulted in customers experiencing unresponsiveness and startup failures on Windows machines using the CrowdStrike Falcon agent, affecting both on-premises and various cloud platforms (Azure, AWS, and Google Cloud).

It’s important to clarify that this incident is separate from the resolved Central US Azure outage (Tracking Id: 1K80-N_8). Microsoft is actively providing support to assist customers in their recovery on our platforms, offering additional guidance and technical assistance.

CrowdStrike has released a public statement on Windows Sensor Update - crowdstrike.com addressing the matter, and it includes recommended steps for a workaround. For environments specific to Azure, further instructions are provided below:

Updated: We approximate impact started as early as 19 July 2024 at 04:09UTC, when this update started rolling out.

Update as of 02:30 UTC on 20 July 2024:

We have received reports of successful recovery from some customers attempting multiple Virtual Machine restart operations on affected Virtual Machines. Customers can attempt to do so as follows:

Using the Azure Portal - attempting 'Restart' on affected VMs
Using the Azure CLI or Azure Shell (https://shell.azure.com)
https://learn.microsoft.com/en-us/cli/azure/vm?view=azure-cli-latest#az-vm-restart

We have received feedback from customers that several reboots may be required, but overall feedback is that reboots are an effective troubleshooting step at this stage.
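As a minimal sketch only (RGNAME and VMNAME below are placeholders, and this assumes the Azure CLI is installed and signed in), the repeated restarts mentioned above can also be scripted:

// Attempts several restarts of an affected VM; multiple reboots were reported to help
for i in 1 2 3 4 5; do
  az vm restart -g RGNAME -n VMNAME
done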

Additional options for recovery:

Option 1

We recommend that customers who are able to do so restore from a backup, preferably from before 19 July 2024 at 04:09 UTC, when this faulty update started rolling out.

Customers leveraging Azure Backup can follow these instructions:
How to restore Azure VM data in Azure portal

Option 2

Customers can attempt to remove the C-00000291*.sys file on the disk directly, potentially avoiding the need to detach and reattach the disk.

Open the Azure CLI and run the following steps:

1. Create a rescue VM:

// Creates a rescue VM of the same size as the original VM, in the same region. Asks for a username and password.
// Makes a copy of the OS disk of the problem VM.
// Attaches the OS disk as a data disk to the rescue VM.
// az vm repair create -g {your-resource-group} -n {vm-name} --verbose

az vm repair create -g RGNAME -n VMNAME --verbose

NOTE: For an encrypted VM, run the following command instead:

az vm repair create -g RGNAME -n BROKENVMNAME --unlock-encrypted-vm --verbose

2. Then run:

// Runs the mitigation script on the rescue VM, which fixes the problem (on the OS-disk copy attached as a data disk).
// az vm repair run -g {your-resource-group} -n {vm-name} --run-id win-crowdstrike-fix-bootloop --verbose

az vm repair run -g RGNAME -n BROKENVMNAME --run-id win-crowdstrike-fix-bootloop --run-on-repair --verbose

3. Finally, run:

// Removes the fixed OS-disk copy from the rescue VM.
// Stops the problem VM (it is not deallocated).
// Attaches the fixed OS disk to the original VM.
// Starts the original VM.
// Prompts to delete the repair VM.
// az vm repair restore -g {your-resource-group} -n {vm-name} --verbose

az vm repair restore -g RGNAME -n BROKENVMNAME --verbose

Note: These steps work for both managed and unmanaged disks. If you run into capacity issues, please retry after some time.
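One practical note, not stated in the advisory above: the az vm repair commands come from the vm-repair extension of the Azure CLI, which may need to be installed first.

// Installs the vm-repair extension if it is not already present
az extension add --name vm-repair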

Option 3

Customers can attempt repairs on the OS disk by following these instructions:

Troubleshoot a Windows VM by attaching the OS disk to a repair VM through the Azure portal

Once the disk is attached, customers can attempt to delete the following file:

Windows/System32/Drivers/CrowdStrike/C-00000291*.sys

The disk can then be detached from the repair VM and re-attached to the original VM.
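As an illustrative sketch only (E: is an assumed drive letter for where the attached OS disk appears on the repair VM; adjust to your environment), the documented file can be removed from an elevated command prompt on the repair VM:

// Run on the repair VM after attaching the broken OS disk; E: is a placeholder drive letter
del "E:\Windows\System32\drivers\CrowdStrike\C-00000291*.sys"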

We can confirm the affected update has been pulled by CrowdStrike. Customers that are continuing to experience issues should reach out to CrowdStrike for additional assistance.

Additionally, we're continuing to investigate additional mitigation options for customers and will share more information as it becomes known.

This message was last updated at 02:34 UTC on 20 July 2024

Jul 20, 06:46 UTC
Update - While our teams continue to work vigorously to restore all Bentley systems and services, we have the following update from Microsoft:

Awareness - Virtual Machines

We are aware of an issue that started on 19 July 2024 at 04:09UTC, which resulted in customers experiencing unresponsiveness and startup failures on Windows machines using the CrowdStrike Falcon agent, affecting both on-premises and various cloud platforms (Azure, AWS, and Google Cloud).

It’s important to clarify that this incident is separate from the resolved Central US Azure outage (Tracking Id: 1K80-N_8). Microsoft is actively providing support to assist customers in their recovery on our platforms, offering additional guidance and technical assistance.

CrowdStrike has released a public statement on Windows Sensor Update - crowdstrike.com addressing the matter, and it includes recommended steps for a workaround. For environments specific to Azure, further instructions are provided below:

Updated: We approximate impact started as early as 19 July 2024 at 04:09UTC, when this update started rolling out.

Update as of 02:30 UTC on 20 July 2024:

We have received reports of successful recovery from some customers attempting multiple Virtual Machine restart operations on affected Virtual Machines. Customers can attempt to do so as follows:

Using the Azure Portal - attempting 'Restart' on affected VMs
Using the Azure CLI or Azure Shell (https://shell.azure.com)
https://learn.microsoft.com/en-us/cli/azure/vm?view=azure-cli-latest#az-vm-restart

We have received feedback from customers that several reboots may be required, but overall feedback is that reboots are an effective troubleshooting step at this stage.

Additional options for recovery:

Option 1

We recommend that customers who are able to do so restore from a backup, preferably from before 19 July 2024 at 04:09 UTC, when this faulty update started rolling out.

Customers leveraging Azure Backup can follow these instructions:
How to restore Azure VM data in Azure portal

Option 2

Customers can attempt to remove the C-00000291*.sys file on the disk directly, potentially avoiding the need to detach and reattach the disk.

Open the Azure CLI and run the following steps:

1. Create a rescue VM:

// Creates a rescue VM of the same size as the original VM, in the same region. Asks for a username and password.
// Makes a copy of the OS disk of the problem VM.
// Attaches the OS disk as a data disk to the rescue VM.
// az vm repair create -g {your-resource-group} -n {vm-name} --verbose

az vm repair create -g RGNAME -n VMNAME --verbose

NOTE: For an encrypted VM, run the following command instead:

az vm repair create -g RGNAME -n BROKENVMNAME --unlock-encrypted-vm --verbose

2. Then run:

// Runs the mitigation script on the rescue VM, which fixes the problem (on the OS-disk copy attached as a data disk).
// az vm repair run -g {your-resource-group} -n {vm-name} --run-id win-crowdstrike-fix-bootloop --verbose

az vm repair run -g RGNAME -n BROKENVMNAME --run-id win-crowdstrike-fix-bootloop --run-on-repair --verbose

3. Finally, run:

// Removes the fixed OS-disk copy from the rescue VM.
// Stops the problem VM (it is not deallocated).
// Attaches the fixed OS disk to the original VM.
// Starts the original VM.
// Prompts to delete the repair VM.
// az vm repair restore -g {your-resource-group} -n {vm-name} --verbose

az vm repair restore -g RGNAME -n BROKENVMNAME --verbose

Note: These steps work for both managed and unmanaged disks. If you run into capacity issues, please retry after some time.

Option 3

Customers can attempt repairs on the OS disk by following these instructions:

Troubleshoot a Windows VM by attaching the OS disk to a repair VM through the Azure portal

Once the disk is attached, customers can attempt to delete the following file:

Windows/System32/Drivers/CrowdStrike/C-00000291*.sys

The disk can then be detached from the repair VM and re-attached to the original VM.

We can confirm the affected update has been pulled by CrowdStrike. Customers that are continuing to experience issues should reach out to CrowdStrike for additional assistance.

Additionally, we're continuing to investigate additional mitigation options for customers and will share more information as it becomes known.

This message was last updated at 02:34 UTC on 20 July 2024

Jul 20, 02:51 UTC
Update - While our teams continue to work vigorously to restore all Bentley systems and services, we have the following update from Microsoft:

Awareness - Virtual Machines

We are aware of an issue that started on 19 July 2024 at 04:09UTC, which resulted in customers experiencing unresponsiveness and startup failures on Windows machines using the CrowdStrike Falcon agent, affecting both on-premises and various cloud platforms (Azure, AWS, and Google Cloud).

It’s important to clarify that this incident is separate from the resolved Central US Azure outage (Tracking Id: 1K80-N_8). Microsoft is actively providing support to assist customers in their recovery on our platforms, offering additional guidance and technical assistance.

CrowdStrike has released a public statement on Windows Sensor Update - crowdstrike.com addressing the matter, and it includes recommended steps for a workaround. For environments specific to Azure, further instructions are provided below:

Updated: We approximate impact started as early as 19 July 2024 at 04:09UTC, when this update started rolling out.

Update as of 10:30 UTC on 19 July 2024:

We have received reports of successful recovery from some customers attempting multiple Virtual Machine restart operations on affected Virtual Machines. Customers can attempt to do so as follows:

Using the Azure Portal - attempting 'Restart' on affected VMs
Using the Azure CLI or Azure Shell (https://shell.azure.com)
https://learn.microsoft.com/en-us/cli/azure/vm?view=azure-cli-latest#az-vm-restart

We have received feedback from customers that several reboots may be required, but overall feedback is that reboots are an effective troubleshooting step at this stage.

Additional options for recovery:

Option 1

We recommend that customers who are able to do so restore from a backup, preferably from before 19 July 2024 at 04:09 UTC, when this faulty update started rolling out.

Customers leveraging Azure Backup can follow these instructions:
How to restore Azure VM data in Azure portal

Option 2

Customers can attempt to remove the C-00000291*.sys file on the disk directly, potentially avoiding the need to detach and reattach the disk.

Open the Azure CLI and run the following steps:

1. Create a rescue VM:

// Creates a rescue VM of the same size as the original VM, in the same region. Asks for a username and password.
// Makes a copy of the OS disk of the problem VM.
// Attaches the OS disk as a data disk to the rescue VM.
// az vm repair create -g {your-resource-group} -n {vm-name} --verbose

az vm repair create -g RGNAME -n VMNAME --verbose

NOTE: For an encrypted VM, run the following command instead:

az vm repair create -g RGNAME -n BROKENVMNAME --unlock-encrypted-vm --verbose

2. Then run:

// Runs the mitigation script on the rescue VM, which fixes the problem (on the OS-disk copy attached as a data disk).
// az vm repair run -g {your-resource-group} -n {vm-name} --run-id win-crowdstrike-fix-bootloop --verbose

az vm repair run -g RGNAME -n BROKENVMNAME --run-id win-crowdstrike-fix-bootloop --run-on-repair --verbose

3. Finally, run:

// Removes the fixed OS-disk copy from the rescue VM.
// Stops the problem VM (it is not deallocated).
// Attaches the fixed OS disk to the original VM.
// Starts the original VM.
// Prompts to delete the repair VM.
// az vm repair restore -g {your-resource-group} -n {vm-name} --verbose

az vm repair restore -g RGNAME -n BROKENVMNAME --verbose

Note: These steps work for both managed and unmanaged disks. If you run into capacity issues, please retry after some time.

Option 3

Customers can attempt repairs on the OS disk by following these instructions:

Troubleshoot a Windows VM by attaching the OS disk to a repair VM through the Azure portal

Once the disk is attached, customers can attempt to delete the following file:

Windows/System32/Drivers/CrowdStrike/C-00000291*.sys

The disk can then be detached from the repair VM and re-attached to the original VM.

We can confirm the affected update has been pulled by CrowdStrike. Customers that are continuing to experience issues should reach out to CrowdStrike for additional assistance.

Additionally, we're continuing to investigate additional mitigation options for customers and will share more information as it becomes known.

This message was last updated at 22:24 UTC on 19 July 2024

Jul 19, 23:08 UTC
Update - While our teams continue to work vigorously to restore all Bentley systems and services, we have the following update from Microsoft:

Awareness - Virtual Machines

We are aware of an issue that started on 19 July 2024 at 04:09UTC, which resulted in customers experiencing unresponsiveness and startup failures on Windows machines using the CrowdStrike Falcon agent, affecting both on-premises and various cloud platforms (Azure, AWS, and Google Cloud).

It’s important to clarify that this incident is separate from the resolved Central US Azure outage (Tracking Id: 1K80-N_8). Microsoft is actively providing support to assist customers in their recovery on our platforms, offering additional guidance and technical assistance.

CrowdStrike has released a public statement on Windows Sensor Update - crowdstrike.com addressing the matter, and it includes recommended steps for a workaround. For environments specific to Azure, further instructions are provided below:

Updated: We approximate impact started as early as 19 July 2024 at 04:09UTC, when this update started rolling out.

Update as of 10:30 UTC on 19 July 2024:

We have received reports of successful recovery from some customers attempting multiple Virtual Machine restart operations on affected Virtual Machines. Customers can attempt to do so as follows:

Using the Azure Portal - attempting 'Restart' on affected VMs
Using the Azure CLI or Azure Shell (https://shell.azure.com)
https://learn.microsoft.com/en-us/cli/azure/vm?view=azure-cli-latest#az-vm-restart

We have received feedback from customers that several reboots may be required, but overall feedback is that reboots are an effective troubleshooting step at this stage.

Additional options for recovery:

Option 1

We recommend that customers who are able to do so restore from a backup, preferably from before 19 July 2024 at 04:09 UTC, when this faulty update started rolling out.

Customers leveraging Azure Backup can follow these instructions:
How to restore Azure VM data in Azure portal

Option 2

Customers can attempt to remove the C-00000291*.sys file on the disk directly, potentially avoiding the need to detach and reattach the disk.

Open the Azure CLI and run the following steps:

1. Create a rescue VM:

// Creates a rescue VM of the same size as the original VM, in the same region. Asks for a username and password.
// Makes a copy of the OS disk of the problem VM.
// Attaches the OS disk as a data disk to the rescue VM.
// az vm repair create -g {your-resource-group} -n {vm-name} --verbose

az vm repair create -g RGNAME -n VMNAME --verbose

2. Then run:

// Runs the mitigation script on the rescue VM, which fixes the problem (on the OS-disk copy attached as a data disk).
// az vm repair run -g {your-resource-group} -n {vm-name} --run-id win-crowdstrike-fix-bootloop --verbose

az vm repair run -g RGNAME -n BROKENVMNAME --run-id win-crowdstrike-fix-bootloop --run-on-repair --verbose

3. Finally, run:

// Removes the fixed OS-disk copy from the rescue VM.
// Stops the problem VM (it is not deallocated).
// Attaches the fixed OS disk to the original VM.
// Starts the original VM.
// Prompts to delete the repair VM.
// az vm repair restore -g {your-resource-group} -n {vm-name} --verbose

az vm repair restore -g RGNAME -n BROKENVMNAME --verbose

Note: These steps work for both managed and unmanaged disks. If you run into capacity issues, please retry after some time.

Option 3

Customers can attempt repairs on the OS disk by following these instructions:

Troubleshoot a Windows VM by attaching the OS disk to a repair VM through the Azure portal

Once the disk is attached, customers can attempt to delete the following file:

Windows/System32/Drivers/CrowdStrike/C-00000291*.sys

The disk can then be detached from the repair VM and re-attached to the original VM.

We can confirm the affected update has been pulled by CrowdStrike. Customers that are continuing to experience issues should reach out to CrowdStrike for additional assistance.

Additionally, we're continuing to investigate additional mitigation options for customers and will share more information as it becomes known.

This message was last updated at 20:23 UTC on 19 July 2024

Jul 19, 21:16 UTC
Update - While our teams continue to work vigorously to restore all Bentley systems and services, we have the following update from Microsoft:

Awareness - Virtual Machines

We are aware of an issue that started on 19 July 2024 at 04:09UTC, which resulted in customers experiencing unresponsiveness and startup failures on Windows machines using the CrowdStrike Falcon agent, affecting both on-premises and various cloud platforms (Azure, AWS, and Google Cloud).

It’s important to clarify that this incident is separate from the resolved Central US Azure outage (Tracking Id: 1K80-N_8). Microsoft is actively providing support to assist customers in their recovery on our platforms, offering additional guidance and technical assistance.

CrowdStrike has released a public statement on Windows Sensor Update - crowdstrike.com addressing the matter, and it includes recommended steps for a workaround. For environments specific to Azure, further instructions are provided below:

Updated: We approximate impact started as early as 19 July 2024 at 04:09UTC, when this update started rolling out.

Update as of 10:30 UTC on 19 July 2024:

We have received reports of successful recovery from some customers attempting multiple Virtual Machine restart operations on affected Virtual Machines. Customers can attempt to do so as follows:

Using the Azure Portal - attempting 'Restart' on affected VMs
Using the Azure CLI or Azure Shell (https://shell.azure.com)
https://learn.microsoft.com/en-us/cli/azure/vm?view=azure-cli-latest#az-vm-restart

We have received feedback from customers that several reboots (as many as 15 have been reported) may be required, but overall feedback is that reboots are an effective troubleshooting step at this stage.

Additional options for recovery:

We recommend that customers who are able to do so restore from a backup, preferably from before 19 July 2024 at 04:09 UTC, when this faulty update started rolling out.

Customers leveraging Azure Backup can follow these instructions:
How to restore Azure VM data in Azure portal

Alternatively, customers can attempt repairs on the OS disk by following these instructions:
Troubleshoot a Windows VM by attaching the OS disk to a repair VM through the Azure portal

Once the disk is attached, customers can attempt to delete the following file:

Windows/System32/Drivers/CrowdStrike/C-00000291*.sys

The disk can then be detached from the repair VM and re-attached to the original VM.

We can confirm the affected update has been pulled by CrowdStrike. Customers that are continuing to experience issues should reach out to CrowdStrike for additional assistance.

Additionally, we're continuing to investigate additional mitigation options for customers and will share more information as it becomes known.

This message was last updated at 19:10 UTC on 19 July 2024

Jul 19, 20:13 UTC
Update - While our teams continue to work vigorously to restore all Bentley systems and services, we have the following update from Microsoft:

Awareness - Virtual Machines

We are aware of an issue that started on July 18, which resulted in customers experiencing unresponsiveness and startup failures on Windows machines using the CrowdStrike Falcon agent, affecting both on-premises and various cloud platforms (Azure, AWS, and Google Cloud).

It’s important to clarify that this incident is separate from the resolved Central US Azure outage (Tracking Id: 1K80-N_8). Microsoft is actively providing support to assist customers in their recovery on our platforms, offering additional guidance and technical assistance.

CrowdStrike has released a public statement on Windows Sensor Update - crowdstrike.com addressing the matter, and it includes recommended steps for a workaround. For environments specific to Azure, further instructions are provided below:

Updated: We approximate impact started as early as 04:09 UTC on the 18th of July, when this update started rolling out.

Update as of 10:30 UTC on 19 July 2024:

We have received reports of successful recovery from some customers attempting multiple Virtual Machine restart operations on affected Virtual Machines. Customers can attempt to do so as follows:

Using the Azure Portal - attempting 'Restart' on affected VMs
Using the Azure CLI or Azure Shell (https://shell.azure.com)
https://learn.microsoft.com/en-us/cli/azure/vm?view=azure-cli-latest#az-vm-restart

We have received feedback from customers that several reboots (as many as 15 have been reported) may be required, but overall feedback is that reboots are an effective troubleshooting step at this stage.

Additional options for recovery:

We recommend that customers who are able to do so restore from a backup, preferably from before 04:09 UTC on the 18th of July, when this faulty update started rolling out.

Customers leveraging Azure Backup can follow these instructions:
How to restore Azure VM data in Azure portal

Alternatively, customers can attempt repairs on the OS disk by following these instructions:
Troubleshoot a Windows VM by attaching the OS disk to a repair VM through the Azure portal

Once the disk is attached, customers can attempt to delete the following file:

Windows/System32/Drivers/CrowdStrike/C-00000291*.sys

The disk can then be detached from the repair VM and re-attached to the original VM.

We can confirm the affected update has been pulled by CrowdStrike. Customers that are continuing to experience issues should reach out to CrowdStrike for additional assistance.

Additionally, we're continuing to investigate additional mitigation options for customers and will share more information as it becomes known.

This message was last updated at 17:43 UTC on 19 July 2024

Jul 19, 18:11 UTC
Update - While our teams continue to work vigorously to restore all Bentley systems and services, CrowdStrike has released the following statement:

CrowdStrike is actively working with customers impacted by a defect found in a single content update for Windows hosts. Mac and Linux hosts are not impacted. This is not a security incident or cyberattack.

The issue has been identified, isolated and a fix has been deployed. We refer customers to the support portal for the latest updates and will continue to provide complete and continuous updates on our website.

We further recommend organizations ensure they’re communicating with CrowdStrike representatives through official channels.

Our team is fully mobilized to ensure the security and stability of CrowdStrike customers.

Jul 19, 11:25 UTC
Update - We are continuing to work on a fix for this issue.
Jul 19, 08:58 UTC
Identified - CrowdStrike Engineering has identified a content deployment related to this issue and reverted those changes.
Jul 19, 08:44 UTC
Update - We are continuing to investigate this issue.
Jul 19, 08:20 UTC
Update - We are continuing to investigate this issue.
Jul 19, 08:14 UTC
Update - Our team is currently investigating an issue with Bentley machines being down. Some users may have trouble accessing some sites or using certain features, and systems may be restarting automatically.
We are working diligently to identify the root cause of the problem and implement a solution. We will provide an update as we learn more.
In the meantime, we apologize for any inconvenience this may cause and appreciate your patience and understanding.

Jul 19, 08:02 UTC
Investigating - Our team is currently investigating an issue with Bentley machines being down. Some users may have trouble accessing some sites or using certain features, and systems may be restarting automatically.
We are working diligently to identify the root cause of the problem and implement a solution. We will provide an update as we learn more.
In the meantime, we apologize for any inconvenience this may cause and appreciate your patience and understanding.

Jul 19, 07:53 UTC
Jul 19, 2024
Resolved - 12:15 UTC on 19 July 2024: Services impacted by this outage recovered progressively, and engineers from the respective teams intervened where further manual recovery was needed. Following an extended monitoring period, we determined that impacted services had returned to their expected availability levels.
Jul 19, 16:57 UTC
Update - We have identified the root cause of the issue. A reported outage in the Azure Central US region is affecting our customers whose environments reside in that region, as well as the Bentley Hotline.

Azure is currently applying mitigation. Customers should continue to see increasing recovery at this time as residual and downstream impact mitigation progresses.

We will provide updates as we learn more and once a fix is implemented.
In the meantime, we apologize for any inconvenience this may cause and appreciate your patience and understanding.

Jul 19, 08:55 UTC
Update - We have identified the root cause of the issue. A reported outage in the Azure Central US region is affecting our customers whose environments reside in that region, as well as the Bentley Hotline.

Azure is currently applying mitigation. Customers should continue to see increasing recovery at this time as residual and downstream impact mitigation progresses.

We will provide updates as we learn more and once a fix is implemented.
In the meantime, we apologize for any inconvenience this may cause and appreciate your patience and understanding.

Jul 19, 04:08 UTC
Update - We have identified the root cause of the issue. A reported outage in the Azure Central US region is affecting our customers whose environments reside in that region, as well as the Bentley Hotline.
Azure is currently applying mitigation. Customers should see signs of recovery at this time as mitigation is applied across resources in the region.

We will provide updates as we learn more and once a fix is implemented.
In the meantime, we apologize for any inconvenience this may cause and appreciate your patience and understanding.

Jul 19, 02:43 UTC
Update - We have identified the root cause of the issue. A reported outage in the Azure Central US region is affecting our customers whose environments reside in that region, as well as the Bentley Hotline.

We will provide updates as we learn more and once a fix is implemented.
In the meantime, we apologize for any inconvenience this may cause and appreciate your patience and understanding.

Jul 19, 01:50 UTC
Update - We have identified the root cause of the issue: a Microsoft Azure Central US region outage impacting the hotline platform.
This reported outage in the Azure Central US region is affecting our customers whose environments reside in that region, as well as the Bentley Hotline.

We will provide updates as we learn more and once a fix is implemented.
In the meantime, we apologize for any inconvenience this may cause and appreciate your patience and understanding.

Jul 19, 00:47 UTC
Identified - We have identified the root cause of the issue: a Microsoft Azure Central US region outage impacting the hotline platform.

Our team is working to resolve the issue and restore full functionality to the Phone System Hotline as soon as possible. We will provide updates as we learn more and once a fix is implemented.

Jul 19, 00:22 UTC
Update - We are continuing to investigate this issue.
Jul 18, 23:55 UTC
Investigating - Our team is currently investigating an issue with our Phone System Hotline. Some users may have trouble calling the North America hotline for support.
We are working diligently to identify the root cause of the problem and implement a solution. We will provide an update as we learn more.
In the meantime, we apologize for any inconvenience this may cause and appreciate your patience and understanding.

Jul 18, 23:54 UTC
Jul 18, 2024
Jul 17, 2024

No incidents reported.

Jul 16, 2024

No incidents reported.

Jul 15, 2024

No incidents reported.

Jul 14, 2024

No incidents reported.

Jul 13, 2024

No incidents reported.