The best way to keep your digital oilfield secure is to have a strong security framework and a good understanding of data flows, so everybody knows what data exists, where it needs to flow, where the security risks are and how they are being managed, says Justin Lowe, energy expert with PA Consulting Group.
Understanding what data exists, where it needs to flow, and where your security risks are, is the key to keeping your digital oilfield implementation secure, said Justin Lowe.
He was speaking at the Digital Energy Journal / Finding Petroleum London conference, “developments with digital oilfield IT infrastructure”.
Once the security framework is designed, everybody needs to understand and work with it, including company staff, contractors and vendors from telecom companies, service companies and security companies, so you have an effective end-to-end security framework.
Once you understand your risks, you can look at where you can reduce the risk easily, for example by turning off certain functionality, switching off less important connections, or separating networks. You also need to identify longer term security measures and be better prepared for any incidents which do occur.
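Switching off less important functionality can start with a simple audit of what is actually running against what is approved. As an illustrative sketch only (the service names and approved list below are hypothetical, not from any real plant standard), such a check might look like:

```python
# Hypothetical allowlist of services approved to run on a plant workstation.
APPROVED_SERVICES = {"historian", "hmi", "opc-server"}

def audit_services(running_services):
    """Return the running services that are not on the approved list
    and are therefore candidates to be switched off."""
    return set(running_services) - APPROVED_SERVICES

# Example: two unapproved services are flagged for review.
flagged = audit_services(["historian", "hmi", "ftp", "telnet"])
print(sorted(flagged))  # ['ftp', 'telnet']
```

The point is not the script itself but the discipline: knowing what should be running makes it much easier to spot, and remove, what should not.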
You can help staff to be better aware of the risks. “Security has never been a part of engineering training, it is only coming into some engineering courses now,” he said.
For new projects it is essential to get security right at the early stages, he said. “Trying to bolt on security afterwards is expensive and not very effective. Most projects are commissioned as completely separate projects and bolted together afterwards and that's where the integration becomes a nightmare.”
And all of this needs governance, with policy and standards. “Someone needs to ensure that all of these bits have the ongoing management that's needed in terms of patching and hardening,” he said.
Some companies do digital oilfield security successfully, developing secure ways to transfer real time drilling and production data from the rig into a secure, central business environment, and providing the data externally where required, he said.
Some exploration and production companies collect real time data from the various service company systems at the rigsite, aggregate it and provide it to collaboration environments. Other companies get the service companies to send the data back to their own base and from there share with the operating company.
But many things are proving very hard. Remote condition monitoring is proving a particular headache. Equipment vendors offer services to continually monitor their own equipment, but “each of the condition monitoring services want to do this in a different way,” he said. “Trying to connect a vendor to lots of different plant securely is quite a challenge when you get down to it.”
Modern drilling rigs might have 10 or so industrial computer systems calling for remote connectivity for troubleshooting or data transfer. “This can be a bit of a nightmare to secure,” he said.
Control systems + business networks
A lot of recent problems stem from the increasing desire to integrate plant control systems with corporate networks, something many company data security people initially said should not even be attempted.
Many plant systems were never designed to connect to anything else.
In the past, the only data systems for drilling were an isolated control system on the drilling rig, and various separate systems operated by service companies (for example mud logging, MWD/LWD). There were phone calls and faxes, and an occasional floppy disk, going to shore.
Now there is much more computer controlled equipment, including dynamic positioning systems, rig control and condition monitoring systems, and they are often connected together. “It is standard TCP/IP, Ethernet connections, and wireless you might find in any sort of office,” he said.
Meanwhile, most company IT security policies were designed for office environments, not industrial ones. Some corporate IT departments have tried to impose the same security demands on offshore users as on office users, despite knowing little about plant environments.
Standard IT security policies commonly require companies to make sure their computers have the latest patches installed, a reasonable request for ‘normal’ business systems but something very difficult in the industrial environment. “It’s an operational nightmare and often doesn't get done because it is too costly, too complex and sometimes can’t be done because vendors don’t allow systems to be patched.”
Drilling environments seem to have a worse IT security record than production environments, he said, particularly with viruses being transmitted on USB sticks.
Service companies increasingly want to get real time data back from the rig to their customers, accessible via an online portal. The open standards for drilling and production data, WITSML and PRODML, encourage this connectivity, which is a “brilliant thing on one side but there are some issues on the downside,” he said, “if it makes the systems easier to hack into or opens them up to malware.”
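Part of what makes WITSML attractive is that it is plain XML, which any standard tooling can read. As a hedged illustration (the snippet below is a heavily simplified WITSML-style document, not a real server response; real WITSML uses versioned schemas and a SOAP store interface), extracting well names might look like:

```python
import xml.etree.ElementTree as ET

# Simplified, hypothetical WITSML-style document for illustration only.
WITSML_SNIPPET = """<wells xmlns="http://www.witsml.org/schemas/1series">
  <well uid="w-001">
    <name>Example Well</name>
  </well>
</wells>"""

NS = {"w": "http://www.witsml.org/schemas/1series"}

def list_well_names(xml_text):
    """Return the well names found in a WITSML-style document."""
    root = ET.fromstring(xml_text)
    return [w.findtext("w:name", namespaces=NS)
            for w in root.findall("w:well", NS)]

print(list_well_names(WITSML_SNIPPET))  # ['Example Well']
```

That same openness cuts both ways: a format any client can parse is also a format any attacker's tooling can parse, which is the downside Mr Lowe describes.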
Accidents and mistakes
Many security problems are due to staff mistakes and accidents. Some examples of accidents and screw-ups Mr Lowe has seen include:
a virus getting into a drilling rig's dynamic positioning system (so a semi-submersible rig was not able to control its position)
a worm disabling a safety critical drilling control management system (it wasn't drilling at the time, so there was no real issue, but it could have been very different, Mr Lowe said)
a service company providing the wrong client's data through a web stream (which caused a lot of embarrassment, though no real problems)
a worm getting into a fiscal metering system (there was no impact on the control system and standard operations, but there was a loss of metering information)
a disgruntled employee disabling a pipeline monitoring system
worms can also fill up your satellite communications link with traffic so operational data doesn't get through, he said.
Problems can occur through poor configuration management as well as viruses. In one example, someone uploaded a software patch to the wrong programmable logic controller (PLC) at the end of a crude pipeline, resulting in an oil spill. The person thought the patch was being uploaded to the PLC they were standing next to, and only realised it wasn't when they saw the light wasn't flashing.
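The lesson of that incident is to verify the target device before deploying anything to it. A minimal sketch of such a guard (the PLC interface here is entirely hypothetical, not a real vendor API):

```python
class PatchTargetError(Exception):
    """Raised when the connected device is not the intended patch target."""

def upload_patch(plc, expected_id, patch_bytes):
    """Refuse to upload unless the connected PLC reports the expected ID.
    `plc` is any object with read_device_id() and write_firmware() methods
    (a hypothetical interface for illustration)."""
    actual = plc.read_device_id()
    if actual != expected_id:
        raise PatchTargetError(f"connected to {actual!r}, expected {expected_id!r}")
    plc.write_firmware(patch_bytes)

# Dummy stand-in for a real controller, for demonstration only.
class FakePLC:
    def __init__(self, device_id):
        self.device_id = device_id
        self.firmware = None
    def read_device_id(self):
        return self.device_id
    def write_firmware(self, data):
        self.firmware = data

upload_patch(FakePLC("PLC-EAST-01"), "PLC-EAST-01", b"patch")  # succeeds
try:
    upload_patch(FakePLC("PLC-WEST-02"), "PLC-EAST-01", b"patch")
except PatchTargetError as e:
    print("blocked:", e)
```

A check like this is cheap compared with the cost of patching the wrong controller on a live pipeline.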
More recently, there have been two well publicised attacks on plant systems, named Night Dragon and Stuxnet.
Night Dragon was an attack specifically targeted at a number of oil and gas companies to steal information in different ways, including hacking into information available through web servers and “spear phishing”: the sending of targeted e-mails to specific individuals, crafted to encourage them to open attachments and install software on their laptops, which would then enable the hacker to steal information from their machines and attack other systems the users had access to.
“This was quite sophisticated, with multiple attack mechanisms over a matter of years,” he said.
When people are trying to hack into online information or get you to install software on your computer, “antivirus and patching are very little use,” he said.
The Stuxnet worm was thought to have been specifically designed to attack the Iranian nuclear industry, but caused wider problems. “I've seen a number of oil and gas companies that got infected,” he said.
Stuxnet was thought to have originally entered systems by somehow being installed on USB sticks used by plant engineers. The software on the USB stick could then exploit multiple previously unknown vulnerabilities in Windows, which had been discovered by whoever wrote the malware; no patches were available. The malware was programmed to look around the network for Siemens distributed control systems (DCS), and if it found one, to install itself on those systems and in the programmable logic controllers (PLCs). It would then hide itself so no-one could see that the worm was there.
“It was really sophisticated,” he said. “This isn't a fourteen year old hacker on diet coke in his bedroom.”
Stuxnet didn't have a big impact on most people, because it was specifically targeted at a certain control system which most people don't have. But if the same energy had been put into targeting a different system, it could have caused many more problems.
Mr Lowe suggests that anyone who has industrial control systems should make sure they know the signs of infection or compromise.
You can monitor the control systems for suspicious activity. There are commercially available services for doing this if the skills are not in house. “It shouldn't be a big effort but is worthwhile for the peace of mind,” he said.
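One common approach to that kind of monitoring is to compare observed network traffic against a baseline of known-good flows. A simplified sketch, assuming a hypothetical baseline and hostnames (real monitoring products work from captured traffic, not hand-written lists):

```python
# Hypothetical baseline of the (source, destination, port) flows that are
# normal on a control network; anything else is flagged for investigation.
BASELINE_FLOWS = {
    ("hmi-01", "plc-01", 502),      # routine Modbus/TCP polling
    ("historian", "plc-01", 502),
}

def flag_suspicious(observed_flows):
    """Return observed flows that are not in the known-good baseline."""
    return [flow for flow in observed_flows if flow not in BASELINE_FLOWS]

observed = [
    ("hmi-01", "plc-01", 502),
    ("laptop-99", "plc-01", 502),   # an unexpected source talking to the PLC
]
print(flag_suspicious(observed))  # [('laptop-99', 'plc-01', 502)]
```

Control network traffic is far more predictable than office traffic, which is why a simple allowlist of flows can catch anomalies that would be lost in the noise of a corporate LAN.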
“It is no longer a matter of [just] protecting against viruses, CDs, USB keys or accidental screw-ups,” he said. “There are people out there targeting oil and gas companies, targeting industrial control systems, and they are getting much more sophisticated.”
The control system vendors have been doing a lot of work to improve their security, but there are still vulnerabilities, he said. The Stuxnet worm has prompted researchers, good and bad, to search for vulnerabilities in control systems.
“A recent notification from the Canadian security services showed fourteen different vulnerabilities recently identified in industrial control system software. Some are patchable; some aren't, and those systems remain vulnerable,” he said.
Given these developments, owners and operators of industrial control systems need to be aware of these risks and ensure they have effective security frameworks in place.
Justin Lowe is an energy expert at PA Consulting Group