A European power supply company experienced unscheduled power outages impacting a large portion of its customer base.

Upon investigation and analysis of the network, it was determined that malware was the root cause of the outages. So how did the malware get into the network, and what caused the major outages that impacted customers?

  1. Recon:
    Prior to the hack, a sustained spear-phishing campaign was launched against the IT staff and sysadmins. The phishing emails contained a Word document with embedded macros which, when enabled, installed a known malware variant. The document was socially engineered to ensure that a sysadmin would be willing to open it (a detection sketch follows this list).
  2. Exploit:
    Once the macro ran, the malware was installed on the corporate network of one of the power companies. Over many months the hackers reconnoitered the networks, then gained access to Active Directory and began harvesting user account credentials. The superusers they targeted had access to the SCADA VPN.
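
Rather than reproduce the malicious macro itself, here is a minimal sketch of the kind of attachment triage that could have flagged the lure document: it checks Office files for embedded VBA macros using the open-source oletools library. The file paths and quarantine wording are illustrative assumptions, not details from the incident.

```python
# pip install oletools
# Hypothetical triage script: flag Office attachments carrying VBA macros,
# the delivery mechanism described above. Paths are examples only.
import sys
from oletools.olevba import VBA_Parser

def has_macros(path: str) -> bool:
    """Return True if the Office document at `path` contains VBA macros."""
    parser = VBA_Parser(path)
    try:
        return parser.detect_vba_macros()
    finally:
        parser.close()

if __name__ == "__main__":
    for attachment in sys.argv[1:]:
        verdict = "MACROS FOUND - quarantine" if has_macros(attachment) else "clean"
        print(f"{attachment}: {verdict}")
```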

THEY WERE IN!!!

Once they had established a connection to the SCADA network, the following steps were taken to disable the overrides that acted as a form of protection.

  1. They reconfigured the UPS systems in place at the operations center used to monitor the power grid. Once the malware executed and the grid went offline, the UPS would also be offline, disabling the SOC and the operators’ ability to react.

  2. The firmware was rewritten for all the serial-to-Ethernet converters used in the power substations. These converters carried the commands that operated the override switches at the substations, so taking them out meant manual intervention would be required at each substation, prolonging the outage (a sketch of the converters’ normal role follows this list).
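
To make the converters’ role concrete: in a typical setup, supervisory software reaches substation switchgear over Modbus/TCP, and the serial-to-Ethernet converter translates that traffic onto the serial line the hardware actually speaks. Below is a minimal, hypothetical sketch using the pymodbus library; the host, coil address, and the assumption that the override switch is exposed as a Modbus coil are illustrative, not details from the incident.

```python
# pip install pymodbus
# Illustrative only (pymodbus 3.x-style API): toggling a switch coil
# through a serial-to-Ethernet gateway. Host and address are made up.
from pymodbus.client import ModbusTcpClient

GATEWAY_HOST = "192.0.2.10"   # hypothetical serial-to-Ethernet converter
OVERRIDE_COIL = 1             # hypothetical coil mapped to an override switch

client = ModbusTcpClient(GATEWAY_HOST, port=502)
if client.connect():
    # The gateway relays this TCP request onto the serial bus that the
    # substation hardware actually speaks. With the gateway firmware
    # destroyed, this remote path disappears and a crew must travel to
    # the substation to operate the switch by hand.
    result = client.write_coil(OVERRIDE_COIL, True)
    if result.isError():
        print("write failed")
    client.close()
```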

They had completed all the groundwork, and now it was time for execution.

  1. In the operations center, an operator noticed his machine was being remotely controlled: the mouse moved through the control software, shutting down the overrides and taking the power grid offline.

  2. Once they had completed the actions to shut down the grid, the hackers executed a KillDisk script on the operators’ machines in the SOC to wipe the OS and BIOS. This effectively killed all the machines at the SOC and lengthened the response time.

  3. The grid was offline, the UPS was offline, the firmware for all 16 core serial-to-Ethernet devices had been overwritten, the SOC was offline, and the call center was disabled, cutting off customer outreach.
  • The investigation took approximately three months following the attack. This is a short time frame for an investigation of this scale, but unlike most CNI operators, the companies had invested heavily in logging equipment. The SANS Institute assisted in the investigation, which was spearheaded by Robert M. Lee and Michael J. Assante.

  • Investigators stated that, unlike with US CNI operators where available information is limited, they had access to firewall logs, system logs, and some network traffic.

How could this have been avoided…

with Chenega BreachDetect

Find out more about Chenega BreachDetect

Request our Security Prospectus

Training, Security, Intelligence, Assessment & Analysis