29th June 2025

OUTAGE - Los Angeles DC-02 Datacenter

We are aware of an ongoing outage in our Los Angeles DC-02 datacenter impacting some of our nodes.

Our engineering team is currently looking into this further. We will have more information on this shortly.

UPDATE: We are in contact with our upstream provider in Los Angeles DC-02 (Multacom) -- this is currently being investigated. We will share additional updates once available.

UPDATE 7:40 AM PDT:

We’ve narrowed down the outage to a specific section within the Los Angeles DC-02 datacenter. Our datacenter provider has confirmed the issue is isolated to a particular segment of the facility, and building engineers are currently investigating further on-site.

Additionally, some bare metal customers may be affected if their hardware is located within the impacted area.

We are continuing to monitor the situation closely and will provide additional updates as they become available. Thank you for your patience and understanding.

UPDATE 8:03 AM PDT:

Our datacenter provider in Los Angeles DC-02 has confirmed that the facility is currently experiencing an A/C (cooling) outage, which is impacting network infrastructure such as the core network routers. As a result, we are now seeing additional devices in this location affected (in terms of network connectivity).

Building engineers and facility operations staff are actively working with external vendors to stabilize the cooling environment and restore normal conditions. We are continuing to monitor the situation closely and will share further updates as they become available.

UPDATE 8:41 AM PDT:

Building engineers, facility staff, and external vendors remain actively engaged in resolving the underlying cooling issue. It appears to be a building-level chiller issue, affecting multiple tenants within the Aon Center building in downtown Los Angeles.

Both our team and the datacenter provider (Multacom) are treating this incident with the highest priority, and we are pushing for timely resolution.

UPDATE 10:00 AM PDT:

Building engineers and vendors are on-site and working on the issue. This remains a top priority, and we’ll provide further updates as they become available.

UPDATE 10:42 AM PDT:

Building engineers remain on-site working to restore the building’s chillers. In addition, our datacenter provider (Multacom) has ordered portable A/C units in an effort to help expedite service restoration. We will share additional updates once available.

UPDATE 11:18 AM PDT:

We are beginning to see some affected servers and network devices come back online as cooling efforts continue. Work is still ongoing, and service restoration may vary across systems depending on location and infrastructure dependencies.

We will continue to provide updates as progress is made. Thank you for your patience and understanding.

UPDATE 1:01 PM PDT:

The large majority of devices are already back online, and we are downgrading this incident to a partial outage at this time. We are actively working on the remaining devices. If your service is not yet up, rest assured we are aware and working on those remaining servers. Thank you for your patience as we work through this incident.

UPDATE 1:20 PM PDT:

Good news: the vast majority of nodes have been restored and are back online. Approximately 18 physical nodes remain down, and these are currently being checked and worked on individually.

The following servers are still being worked on by the on-site datacenter team:

LAXSSD890nerd2DC02, LAXSSD930nerd2DC02, LAXSSD7001nerd2DC02, LAX011nerd4dc02WIN, LAX012nerd4dc02WIN, LAXSSD102nerd1DC02, LAXSSD5002nerd3DC02, LAXSSD7000nerd2DC02, LAXSSD5007nerd3DC02, LAXSSD5009nerd3DC02, LAXSSD4025nerd6DC02, LAXSSD7004nerd2DC02, LAXSSD6020nerd7DC02, LAXSSD6023nerd7DC02, LAXSSD3009nerd3DC02, LAXSSD3018nerd3DC02, LAXSSD3019nerd3DC02, LAXSSD5010nerd3DC02.

We will continue to provide additional updates as progress is made. Thank you for your continued patience.

UPDATE 1:22 PM PDT:

Building chillers have been restored successfully, and as a result, the previously deployed portable A/C units are no longer required. Cooling conditions within the facility are now stabilizing.

Together with the on-site datacenter team, we are still checking the individual devices that remain offline. We will provide additional updates as progress is made.

UPDATE 1:40 PM PDT:

Recovery efforts have progressed significantly; the number of physical devices still impacted is now in the single digits.

The remaining affected nodes are: LAXSSD4025nerd6DC02, LAXSSD6020nerd7DC02, LAXSSD930nerd2DC02, LAXSSD7000nerd2DC02, LAXSSD5002nerd3DC02, LAXSSD5010nerd3DC02, LAXSSD3009nerd3DC02, LAXSSD3018nerd3DC02, LAXSSD3019nerd3DC02.

All shared/reseller hosting servers are back online as well.

Our team is actively working on bringing these systems back online. Further updates will be shared as progress continues.

UPDATE 2:06 PM PDT:

We are pleased to report that all nodes are back online, with the exception of three physical nodes (KVM VPS nodes) that remain affected: LAXSSD930nerd2DC02, LAXSSD7000nerd2DC02, and LAXSSD5002nerd3DC02.

While the rest of the affected systems came back online successfully, these three nodes did not return after power restoration. As a result, they will require hands-on hardware checks by a technician.

It may take several hours before we reach a full conclusion. We will share updates on these last three nodes as we receive more information. Thank you again for your patience.

UPDATE 7:57 PM PDT:

We are still actively working on a resolution path for the 3 remaining nodes. Additional updates to follow shortly.

UPDATE 9:25 PM PDT:

We've determined a resolution path for LAXSSD7000nerd2DC02 and customers on this node have been sent an e-mail with details.

As for the last two nodes (LAXSSD930nerd2DC02 and LAXSSD5002nerd3DC02), our team is still working on them.

At this stage, it may be a while longer before we can reach a full conclusion on these two remaining nodes. We sincerely thank you for your continued patience and understanding as we work through this.

UPDATE 10:21 PM PDT:

We are currently investigating a potential issue with 1 additional host node: LAXSSD5009nerd3DC02. This was recently identified during ongoing checks and is now being looked into.

We will share further details as soon as more information becomes available. Thank you for your continued patience.

FINAL UPDATE:

We are pleased to confirm that recovery efforts have concluded on this incident, and all affected customers are now back online.

Customers with recoverable VPS instances have had their original services restored successfully. For the small number of cases where recovery was not possible (due to physical node failure), replacement VPS instances have been provisioned and customers have already been contacted via email with full details. Specifically, 3 physical nodes were impacted to the point where replacements were necessary.

At this point, all services should be operational — either in their original form or via a replacement VPS.

If you are still seeing issues, we encourage you to check your SolusVM panel and try rebooting your VPS. For anything else, feel free to open a support ticket — our team remains fully available to assist.

While our staff is still actively working through an unusually high volume of support tickets, we are committed to ensuring each request is addressed with care. We greatly appreciate your patience and understanding during this incident, and in the days ahead.

Thank you again for your support.