We are aware of continued service disruptions and intermittent availability affecting the SJ196KVM host node.
Our engineering team has been actively troubleshooting this issue; however, it appears that the most recent mitigation steps did not fully resolve the underlying problem. We are continuing to investigate the root cause and are now working toward a permanent resolution.
This includes evaluating additional corrective actions, up to and including migrating impacted virtual machines to an alternate, stable host node if necessary to ensure long-term service reliability.
We will share further updates as soon as more information becomes available or once a definitive remediation path is confirmed. We appreciate your patience while our team works to fully resolve this issue.
UPDATE: This issue is still actively being worked on by the on-site datacenter team. Additional updates will be provided shortly. Thank you for your continued patience.
UPDATE: Our team is actively working on transplanting the underlying storage drives into a new physical host node. The goal of this process is to restore all affected virtual machines seamlessly, with services coming back online exactly as they were before the disruption.
As part of this remediation, the workloads will also be brought up on newer, higher-performing hardware to ensure improved stability and long-term reliability.
We will provide another update as soon as the drive transplant and validation process has been completed. Thank you again for your continued patience while we finalize this work.
UPDATE: The transplant process has now been completed, and all affected virtual machines have been successfully brought back online.
At this time, all VPS instances hosted on SJ196KVM are operating normally, and we are closely monitoring the environment. We are cautiously optimistic that this remediation will fully resolve the issues previously impacting SJ196KVM, as the workloads are now running on new, healthy hardware. For transparency: the existing drives and RAID controller remain in place, but all other components, including the motherboard, CPU, and memory, have been replaced with new equipment. Initial validation checks look good, and performance and stability are within expected parameters.
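For those curious what such validation can involve, below is a minimal, hypothetical sketch of the kind of post-transplant checks one might run on a Linux host: confirming the RAID array reports a healthy state and that guests answer on the network. The array device path and guest address are illustrative assumptions, not details from this node.

#!/usr/bin/env python3
# Hypothetical post-transplant validation sketch (illustrative only;
# device path and guest addresses are placeholders, not real node details).
import subprocess

RAID_DEVICE = "/dev/md0"            # assumed software-RAID array device
GUEST_ADDRESSES = ["203.0.113.10"]  # placeholder guest IP (TEST-NET-3 range)

def raid_is_healthy(device: str) -> bool:
    # Parse `mdadm --detail` output and look for a clean/active array state.
    result = subprocess.run(["mdadm", "--detail", device],
                            capture_output=True, text=True)
    if result.returncode != 0:
        return False
    return any("State :" in line and ("clean" in line or "active" in line)
               for line in result.stdout.splitlines())

def guest_responds(address: str) -> bool:
    # Send a single ICMP echo request with a 2-second reply timeout.
    result = subprocess.run(["ping", "-c", "1", "-W", "2", address],
                            capture_output=True)
    return result.returncode == 0

if __name__ == "__main__":
    print(f"RAID {RAID_DEVICE} healthy: {raid_is_healthy(RAID_DEVICE)}")
    for addr in GUEST_ADDRESSES:
        print(f"Guest {addr} reachable: {guest_responds(addr)}")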
We will continue to monitor the node closely and will proactively address anything out of the ordinary should it arise. Thank you for your patience and understanding while we worked to fully resolve this issue.