All systems are operational


Past Incidents

21st March 2024

Los Angeles DC-02 OUTAGE - LAXSSD4031nerd6DC02

We are aware that LAXSSD4031nerd6DC02 is currently experiencing connectivity issues. We are currently looking into this node further and will provide additional updates on this matter once we have more information.

UPDATE: There appears to be a hardware-related issue specific to this physical host. We are still working on it and will provide additional information once available.

UPDATE: This is still being worked on; we appreciate your patience.

UPDATE: A resolution path has been identified and is actively being implemented by our technicians. We are continuing to work on this matter and will update this status incident again once a full conclusion has been reached.

UPDATE: A resolution strategy has been identified and put into motion (affected clients were sent an email with details). We will be closing out this status incident at this time. We remain available 24x7 to assist, so please feel free to reach out to us via support ticket if you have any questions.

New York Datacenter OUTAGE - NYRN167KVM

We are aware that NYRN167KVM is currently experiencing connectivity issues. We are currently looking into this node further and will provide additional updates on this matter once we have more information.

UPDATE: This is now resolved. If you are still experiencing any issues please open a support ticket and we'd be happy to assist.

Strasbourg Datacenter Partial Outage - DataDock Strasbourg Facility

We are aware that multiple France nodes (FR100KVM, FR101KVM, FR102KVM, FR103KVM, FR104KVM, FR105KVM, FR106KVM, FR107KVM, FR108KVM, FR109KVM, and FR1000 Shared Hosting Server) are currently offline.

UPDATE: This is an issue with the DataDock Strasbourg facility; the servers themselves are healthy. A water leak reached a room containing UPS batteries, and the facility had to power off one of the floors within the building. We are currently waiting for electricians to clear/replace the damaged batteries before power can be restored to that floor. The powered-off room houses some of our servers and network infrastructure.

None of our servers are affected by any water damage; they are simply powered off for the time being. The reason your service is not accessible is that parts of our network are affected.

UPDATE: As of now this is what DataDock states for ETA: "We cannot present a precise timetable for recovery at the moment. Preliminary estimates indicate that the failure is expected to last until the evening hours."

UPDATE: The water leakage caused damage to the batteries of DataDock's UPS system. Due to the failure of the UPS system, the room cannot currently be supplied with electricity.

Electrical specialists analyzed the situation and developed a plan to remove the damaged battery blocks from the UPS system. This is required before power can be restored. The removal of the damaged batteries is currently underway, but it may take some time to complete, so we cannot provide an ETA at this time.

We assure you that our team is working diligently alongside the DataDock facility to resolve the issue as quickly as possible.

We understand the inconvenience this may cause, and we sincerely apologize for any disruption to your services.

UPDATE: This is the latest update we received:

As a next step, electrical specialists are evaluating whether and how the electrical infrastructure can be restored and power switched back on. This external vendor is working on this through the night.

Due to the complexity of the damaged UPS system, we cannot provide a concrete ETA.

We assure you that our team is working diligently to resolve the issue as quickly as possible.

UPDATE: We are still awaiting additional news from DataDock. We've been told that the work is still actively being carried out.

UPDATE: This is the latest update we received:

Electrical contractors are still assessing the damage to the electrical infrastructure and how to safely restore the infrastructure and restore power for the affected floor.

We understand the inconvenience this has caused and we sincerely apologize for any disruption to your services.

We will keep you updated as we learn more about the situation and the progress of the restoration efforts. Thank you for your patience and understanding.

UPDATE: Electrical contractors have assessed that parts of the electrical infrastructure will need to be replaced in order to be safely restarted. It's estimated that this will take several days. We are working on executing an alternative plan to attempt to get things up sooner. We will share additional details within the next couple hours.

UPDATE: Physical access to the facility has been gained with the approval of the local authorities, who are actively assessing the damage caused by the recent water leak. To avoid waiting further on the electricians, our current plan of action is to relocate our affected infrastructure from the 1st floor to the 2nd floor of the DataDock facility. We cannot give a reliable ETA yet, as the racks and circuit runs are still being prepared by the facility operator. Once the new racks are ready, we will swiftly relocate our infrastructure accordingly. As a rough estimate, we expect the relocation to take through the weekend to complete.

UPDATE: While we wait for the new cabinets to be allocated to us and energized, networking equipment is actively being prepared in anticipation of the new 2nd floor deployment.

UPDATE: The software side of the networking configuration is complete, and the new racks will soon be energized. We will share additional updates later today.

UPDATE: Physical access has been gained to the new racks on the 2nd floor. We are still waiting for the facility operator to complete the circuit runs. In the meantime, we are beginning to rack the PDUs (power distribution units) and related networking equipment.

UPDATE: The installation of all Power Distribution equipment has been completed. We have also begun the process of dismantling servers from the affected 1st floor DC room and will begin transporting them to the 2nd floor.

UPDATE: We are making excellent progress and we will share additional updates later today.

UPDATE 3/25/24: Service has already been restored for approximately 10% of the originally affected servers. If your service in France is already back online, you can consider its outage concluded; no further interruptions are expected for your service. If your service in France is still offline, please assume that we are still working on it. While we cannot provide an exact ETA due to the sheer volume of affected servers within the DC, work is ongoing and we will provide additional updates within this status incident once available.

Also, going forward we will be adding dates to these status updates to ensure clarity.

UPDATE 3/25/24 #2: Service is now restored for approximately 75% of the originally affected servers. If your service in France is already back online, you can consider its outage concluded; no further interruptions are expected for your service. If your service in France is still offline, please assume that we are still working on it. While we cannot provide an exact ETA due to the sheer volume of affected servers within the DC, work is ongoing and we will provide additional updates within this status incident once available.

UPDATE 3/25/24 #3 - CONCLUSION: All of our affected infrastructure has been successfully relocated from the 1st floor to the 2nd floor, and servers are now back online. At this point, if your service in France is still offline, it may be an isolated problem, so please open a support ticket and we can investigate/troubleshoot accordingly.

We will now be closing out this incident as resolved. Our team remains available 24x7 to assist our customers with any individual/isolated issues that may arise. Once the dust has settled, we will reach out to affected customers separately via email with further details. To allow our team to prioritize resolving any isolated cases stemming from this incident, we ask that you refrain from opening support tickets solely for requests for information (RFI) or reason for outage (RFO) until further notice. However, everything should be back online at this point, so don't hesitate to contact us if your service is still offline.

20th March 2024

Dallas Datacenter OUTAGE - DAL111KVM

We are aware that DAL111KVM is currently experiencing connectivity issues. We are currently looking into this node further and will provide additional updates on this matter once we have more information.

UPDATE: This is now resolved. If you are still experiencing any issues please open a support ticket and we'd be happy to assist.

19th March 2024

No incidents reported

18th March 2024

New York Datacenter OUTAGE - NYRYZEN102

We are aware that NYRYZEN102 is currently experiencing connectivity issues. We are currently looking into this node further and will provide additional updates on this matter once we have more information.

UPDATE: This is now resolved. If you are still experiencing any issues please open a support ticket and we'd be happy to assist.

San Jose Datacenter Outage - SJ142KVM

We are aware that SJ142KVM is currently experiencing connectivity issues. We are currently looking into this node further and will provide additional updates on this matter once we have more information.

UPDATE: This is now resolved. If you are still experiencing any issues please open a support ticket and we'd be happy to assist.

Dallas Datacenter Outage - DAL123KVM

We are aware that DAL123KVM is currently experiencing connectivity issues. We are currently looking into this node further and will provide additional updates on this matter once we have more information.

UPDATE: This is now resolved. If you are still experiencing any issues please open a support ticket and we'd be happy to assist.

17th March 2024

Dallas Datacenter Outage - DAL107KVM

We are aware that DAL107KVM is currently experiencing connectivity issues. We are currently looking into this node further and will provide additional updates on this matter once we have more information.

UPDATE: This is now resolved. If you are still experiencing any issues please open a support ticket and we'd be happy to assist.

New York Datacenter Outage - NYRN115KVM

We are aware that NYRN115KVM is currently experiencing connectivity issues. We are currently looking into this node further and will provide additional updates on this matter once we have more information.

UPDATE: This is now resolved. If you are still experiencing any issues please open a support ticket and we'd be happy to assist.

New York Datacenter OUTAGE - NYRYZEN102

We are aware that NYRYZEN102 is currently experiencing connectivity issues. We are currently looking into this node further and will provide additional updates on this matter once we have more information.

UPDATE: This is now resolved. If you are still experiencing any issues please open a support ticket and we'd be happy to assist.

San Jose Datacenter Outage - SJ157KVM

We are aware that SJ157KVM is currently experiencing connectivity issues. We are currently looking into this node further and will provide additional updates on this matter once we have more information.

UPDATE: This is now resolved. If you are still experiencing any issues please open a support ticket and we'd be happy to assist.

16th March 2024

Chicago Datacenter Outage - CHI116KVM

We are aware that CHI116KVM is currently experiencing connectivity issues. We are currently looking into this node further and will provide additional updates on this matter once we have more information.

UPDATE: This is now resolved. If you are still experiencing any issues please open a support ticket and we'd be happy to assist.