What SCADA Evolution Means for Developers

If you’ve walked through factories and seen operator or supervisor screens like the one below, you’re actually seeing both the best and worst aspects of technology evolution! Clearly, no data is left hidden within the machine or process, but the screen design looks to have been driven by the ability to visualise whatever is available from the underlying controls, rather than a more nuanced view of how to support different people in their work. You could say that the adoption of modern design approaches to building a “good” HMI or SCADA application has lagged behind what the underlying tools can support.

One place to configure & manage for SCADA, Historian, Visualisation

In Proficy iFIX, GE Digital has incorporated a mix of development accelerators and design philosophies that can lead to more effective user experiences with a deployed system, while also lowering the overall cost of building, maintaining, and adapting a SCADA application.

Three critical elements stand out:

1. Model-centric design

This brings object-oriented development principles to SCADA and related applications. With a “home” for standard definitions of common assets, and their related descriptive and attribute data, OT teams can create reusable application components that are quick to deploy for each physical instance of a type. The model also provides useful application foundations, so things like animations, alarm filters and so on can be defined as appropriate for a class or type – and therefore easily rolled out into the screens where instances of each type are present. And with developments in the GE suite making the model infrastructure available to Historian, analytics and MES solutions, work done once can defray the cost and effort needed in related programs.
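
As a rough illustration of the idea (with hypothetical class and attribute names – this is not GE Digital’s actual object model API), a type defined once can be stamped out for every physical instance:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class AssetType:
    """Type-level definition: declared once, reused for every instance."""
    name: str
    attributes: List[str]                       # e.g. ["speed", "flow"]
    alarm_filters: List[str] = field(default_factory=list)

@dataclass
class AssetInstance:
    """A physical asset bound to its type; inherits type-level behaviour."""
    tag: str
    asset_type: AssetType
    values: Dict[str, float] = field(default_factory=dict)

pump_type = AssetType("CentrifugalPump", ["speed", "flow", "seal_temp"],
                      alarm_filters=["seal_temp > 80"])
pumps = [AssetInstance(f"PUMP-{i:02d}", pump_type) for i in range(1, 4)]
# A screen template written against AssetType can now render any pump
# instance without per-instance engineering.
```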

2. Centralised, web-based administration and development

In combination with the modelling capability, this offers a big gain in productivity for teams managing multiple instances of SCADA. With common object definitions and standard screen templates, the speed at which new capabilities or changes to existing footprints can be built, tested, and rolled out means a huge recovery of time for skilled personnel.

3. The subtle side of web-based clients

Many older applications have large bases of custom scripting – in many cases to enable interaction with data sources outside the SCADA, to drive non-standard animations, or to enable conditional logic. With the shift to web-based client technology, the mechanics for such functions are shifting to more configurable object behaviours, and to server-side functions for data integrations. These mean simpler, more maintainable, and less error-prone deployments.
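
As a hedged sketch of the pattern (the endpoint and function name are hypothetical), a single server-side integration function can replace the per-client scripts that older applications used to call an external system:

```python
import json
from urllib.request import urlopen

def fetch_lab_results(batch_id: str) -> dict:
    """Server-side integration point: pull data from an external system
    once, then expose it to every client as ordinary tag values."""
    # lims.example.com is a placeholder for whatever external source the
    # old client-side scripts used to call individually.
    with urlopen(f"https://lims.example.com/api/batches/{batch_id}") as resp:
        return json.load(resp)
```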

Taking advantage of what current-generation iFIX offers will mean a different development approach – considering a useful asset and object model structure, then templating the way objects should be deployed, is a new starting point for many. But with that groundwork laid, the speed to a final solution is in many (most!) cases faster than older methodologies – and that’s before considering the advantage of reusability across asset types, or across multiple servers for different lines or sites.

Recovered time buys room for other changes

With rich automation data mapped to the model, and faster methods to build and roll out screens, different users can have their views tailored to suit their regular work. Our earlier screen example reflected a common belief that screen design is time-consuming, so it’s best to put as much data as possible in one place so that operators, technicians, maintenance and even improvement teams can all get what they need without excessive development effort. But that can mean a confused mashup of items that get in the way of managing the core process, and in turn actually hamper investigations when things are going wrong.

But where development time is less of a constraint, more streamlined views can be deployed to support core work processes, with increasing levels of detail exposed on other screens for more technical investigation or troubleshooting. Even without fully adopting GE Digital’s Efficient HMI design guidelines, firms can expect faster and more effective responses from operators and supervisors who don’t have to sift through complex, overloaded views simply to maintain steady-state operations.

With significant gains to be had in terms of operator responsiveness and effective management of exceptions, the user experience itself can merit as much consideration as the under-the-bonnet changes that benefit developers.

Greenfield vs. Brownfield

It may seem like adopting a model-based approach and taking first steps with the new development environment would be easier on a fresh new project, whereas an upgrade scenario should be addressed by “simply” porting forward old screens, the database, and so on. But when you consider all that can be involved in that forward migration, the mix of things that need “just a few tweaks” can mean as much – or more – work than a fresh build of the system, where the old serves as a point of reference for design and user requirements.

The process database is usually the easiest part of the configuration to migrate forward. Even if changing from legacy drivers to IGS or Kepware, these tend to be pretty quick. Most of the trade-offs of time and budget for an overall better solution relate to screen (and related scripting) upgrades. From the many (many!) upgrades we’ve observed our customers make, we see common areas where a “modernisation” rather than a migration can actually be more cost-effective, as well as leaving users with a more satisfying solution.

One question to consider is how much change to introduce at once. While there is often concern about whether modernisation can be “too much” change, it’s equally true that operators genuinely want to support their companies in getting better. So if what they see at the end of an investment looks and feels the same way it always has, the chance to enable improvements may have been lost – and with it a chance to engage and energise employees who want to be a part of making things better.

Old vs. New

iFIX 2023 and the broader Proficy suite incorporate more modern tools, which in turn offer choices about methods and approaches. Beyond the technical enablement, engineering and IT teams may find that exploring these ideas offers benefits ranging from something as straightforward as modernising systems to avoid obsolescence risk, to making tangible progress on IoT and broader digital initiatives.

Version Control: Setting the Stage for a Successful Project

One of the advantages of managing technology assets is that you can do things with them beyond just running them, such as keeping track of them and repairing them! Optimising a production process for efficiency or utility usage is often a matter of enhancing the code in a control program, SCADA, or related system, so the tech assets themselves can be the foundation for ongoing gains. And similarly, as customer or regulatory requirements for proof of security or insight into production processes increase, the tech assets again become the vehicle to satisfy new demands, rather than re-engineering the underlying mechanical or process equipment.

It’s this very adaptability that makes version control around configurations and programs valuable. As configurations and programs change, being sure that the correct versions are running is key to sustaining the improvements built into those latest releases. With that in mind, a good technology asset management program, such as octoplant, will have version control as a central concern.
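
Conceptually, the compare-to-master discipline at the heart of such tools boils down to checking what is running against the released version. A minimal sketch, assuming file-based program exports (this is not octoplant’s implementation):

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """SHA-256 digest of a program or configuration file."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def matches_master(master: Path, uploaded: Path) -> bool:
    """True if the program read back from the device matches the
    released master copy; False flags unauthorised drift."""
    return file_digest(master) == file_digest(uploaded)

# Hypothetical file names for illustration:
# if not matches_master(Path("master/line1_plc.l5x"),
#                       Path("uploads/line1_plc.l5x")):
#     print("Line 1 PLC differs from the released version")
```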

Whether deploying solutions in this area for the first time, or refreshing an established set of practices, it’s worthwhile to step back and evaluate what you want version control to do for you – operationally, compliance-wise and so on. From that, the capabilities needed from any tools deployed will become clearer. With that in mind, we’ve noted some of the key areas to consider, and the decisions that can come from them. We hope this helps you set the stage for a successful project!

Decide How Deeply to Embed Version Control

We take VPNs, remote access and web applications for granted in a lot of ways – but this combination of technology means it’s easier than ever to incorporate external development and engineering teams into your asset management and version control schemes. Evaluate whether it makes sense to set up external parties as users of your systems, or whether it makes more sense to have your personnel manage the release and return of program and configuration files. The former approach can be the most efficient in terms of project work, but it may mean some coordination with IT to ensure access is granted securely. Either way, setting your version control system to reflect when a program is under development by others ensures you have a smooth process for reincorporating their work back into your operation.
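
A minimal sketch of the record-keeping involved (all names are hypothetical) might look like this:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ExternalCheckout:
    """Records that a program is out with an external team, so returned
    work comes back through a controlled, reviewable path."""
    program: str                       # e.g. "line3_filler_plc.project"
    holder: str                        # the external engineering partner
    released: datetime
    returned: Optional[datetime] = None

checkout = ExternalCheckout("line3_filler_plc.project", "Integrator Ltd",
                            datetime.now())
# On return, set checkout.returned and run a compare before re-release.
```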

Be Flexible About the Scope of What Should be Version-Controlled.

Program source code and configurations are the default focus of solutions like octoplant. Yet we see many firms deploying version control around supporting technical documentation, diagrams, and even SOP (Standard Operating Procedure) documents relating to how things like code troubleshooting and changes should be done.

Define Your Storage and Navigation Philosophy. 

In many cases, this can be a very easy decision – set up a model (and associated file storage structure) that reflects your enterprise’s physical reality. This works especially well when deploying automated backup and compare-to-master regimens, as each individual asset is reflected in the model.

However, some types of business may find alternatives useful. If you have many instances of an asset where the code base is genuinely identical between assets, changes are rolled out en masse, and automated backup and compare is not to be deployed, it can make sense to adopt a category-based or asset-type-specific model and storage scheme.

It may be that a blended approach makes sense. Where non-critical assets and programs vary both in their automation, and therefore in their program structure, an enterprise model works well. But in some industries (food, pharma, CPG), it can be common to maintain identical core asset types, with identical automation and process control, so managing some versions by category or type can be useful too.
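
To make the two philosophies concrete, here is a minimal sketch of the two path conventions (all names are hypothetical):

```python
def enterprise_path(site: str, area: str, line: str, asset: str) -> str:
    """Enterprise model: one node per physical asset - suits automated
    backup and compare, since every device is individually tracked."""
    return f"{site}/{area}/{line}/{asset}"

def category_path(asset_type: str, version: int) -> str:
    """Category model: one master code base per asset type - suits fleets
    of genuinely identical assets updated en masse."""
    return f"types/{asset_type}/v{version}"

print(enterprise_path("Leeds", "Packaging", "Line2", "Palletiser-PLC"))
print(category_path("Palletiser-PLC", 14))
```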

Reporting and Dashboards – Version Control Data is Not Just for Developers and Engineers.

A robust solution will track actions taken by different users in relation to different assets’ code bases, and in relation to any automated comparisons. This means you have a rich audit trail that can certainly be used to ensure disciplines are being followed, but it also means you can easily support any regulatory or customer requirements for data. And with a model of your operation reflecting the different makes, models, variants and generations of tech assets, you’ll have a tech inventory at your fingertips that can make reinvestment and replacement planning much more efficient. So, make sure your plan for sharing dashboards and reports reflects the different people in your organisation who could use their own view of the tech assets and the programs running in them.

If you’d like to learn more about the work we do with our customers on technology asset management, you can get in touch here, or ring us on +44 113 531 2400.

Are your PLCs an easy target? A mindset shift can significantly reduce PLC firmware vulnerabilities

Since the beginning of the COVID-19 pandemic, businesses across the UK have faced a surge in cybercrime. In fact, research indicates that UK businesses experienced one attempted cyberattack every 46 seconds on average in 2020. Industrial businesses are a prime target for hackers and the ramifications of a data breach or denial-of-service attack are far-reaching, making system security imperative. Here, David Evanson, corporate vendor relationship manager at Novotek UK and Ireland, explains how industrial businesses can keep their vital systems secure.

For many business leaders and engineers, it is still tempting to consider large multinational companies or data-rich digital service providers to be the prime targets for hackers. However, the growing volume of cyberattacks on businesses globally shows that any company can be a target of malicious attacks on systems and services.

According to research by internet service provider Beaming, there were 686,961 attempted system breaches among UK businesses in 2020, marking a 20 per cent increase on 2019. Of these attacks, Beaming noted that one in ten intended to gain control of an Internet of Things (IoT) device — something that indicates a tendency to target system continuity rather than conventional data.

Both factors together are cause for alarm among industrial businesses of all sizes. Hackers are targeting all manner of companies, from start-ups to global organisations, and focussing more on the growing number of internet-connected devices and systems that were previously isolated.

The consequences of a device being compromised range from data extraction to service shutdown, and in any case the financial and production impacts to an industrial business are significant. There is no single quick fix to bolster cybersecurity due to the varying types of hacks that can take place. Some cyberattacks are complex and sophisticated; others less so. Many attacks on devices tend to fall into the latter category, which means there are some steps industrial businesses can take to minimise risk.

Novotek has been working closely with industrial businesses in the UK and Ireland for decades. One common thing that we have observed with automation hardware and software is that many engineers do not regularly upgrade software or firmware. Instead, there is a tendency to view automation as a one-off, fit-and-forget purchase. The hardware may be physically maintained on a regular schedule, but the invisible software aspect is often neglected.

GE Fanuc Series 90-30

Older firmware is more susceptible to hacks because it often contains unpatched known security vulnerabilities, such as weak authentication algorithms, obsolete encryption technologies or backdoors for unauthorised access. For a programmable logic controller (PLC), older firmware versions make it possible for cyber attackers to change the module state to halt-mode, resulting in a denial-of-service that stops production or prevents critical processes from running.

PLC manufacturers routinely update firmware to ensure it is robust and secure in the face of the changing cyber landscape, but there is not always a set interval between these updates.

In some cases, updates are released in the days or weeks following the discovery of a vulnerability — whether by the manufacturer, white-hat hackers or genuine attackers — to minimise end-user risk. The firmware version’s upgrade information should outline any exploits that have been fixed.

However, it’s important to note that legacy PLCs may no longer receive firmware updates from the manufacturer if the system has reached obsolescence. Many engineers opt to air-gap older PLCs to minimise the cybersecurity risk, but the lack of firmware support can also create interoperability issues with connected devices. Another part of the network, such as a switch, receiving an update can cause communications and compatibility issues with PLCs running on older versions — yet another reason why systems should run on the most recent software patches.
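
A simple way to operationalise this is a fleet audit that flags firmware below a known-good baseline. The sketch below uses made-up inventory data and version numbers purely for illustration; real baselines would come from the manufacturer’s security advisories:

```python
# None marks a model that no longer receives firmware updates.
MIN_SAFE_FW = {"PACSystems RX3i": (10, 85), "Series 90-30": None}

fleet = [
    {"tag": "PLC-01", "model": "PACSystems RX3i", "fw": (9, 70)},
    {"tag": "PLC-02", "model": "Series 90-30",    "fw": (4, 2)},
]

for plc in fleet:
    baseline = MIN_SAFE_FW.get(plc["model"])
    if baseline is None:
        print(f"{plc['tag']}: no supported firmware - air-gap or replace")
    elif plc["fw"] < baseline:          # tuples compare major, then minor
        print(f"{plc['tag']}: firmware below baseline - schedule an update")
```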

At this stage, engineers should invest in a more modern PLC to minimise risk — and, due to the rate of advancement of PLCs in recent years, likely benefit from greater functionality at the same time.

Firmware vulnerabilities are unavoidable, regardless of the quality of the PLC. At Novotek, we give extensive support for the Emerson PACSystems products that we provide to businesses in the UK and Ireland. This involves not only support with firmware updates as they become available, but also guidance on wider system resilience to ensure that businesses are as safe as possible from hardware vulnerabilities. The growth in cyberattacks will continue long beyond the end of the COVID-19 pandemic, and infrastructure and automation are increasingly becoming targets. It may seem a simple step, but taking the same upgrade approach to firmware that we do with conventional computers can help engineers to secure their operations and keep running systems safely.

Bridging the connectivity gap

In the age of connectivity, there is no shortage of useful information that engineers can leverage to optimise and improve operations. Everything from the speed of motors to the weather forecast can influence production. However, bringing these data sources together in a secure way is a challenge faced by many engineers. Here, George Walker, managing director of Novotek UK and Ireland, explains how engineers can bridge the gap between local process data and external data sources.

The Internet of Things (IoT) may still be a relatively new concept for many consumers and professional service businesses, but the idea of machine-to-machine communication and connectivity is nothing new for industry. In fact, it’s been more than 50 years since the programmable logic controller (PLC) first became popular among industrial businesses as a means of controlling connected systems.

The principle behind the PLC is quite simple: see, think and do. The controller will ‘see’ what is happening in a process based on the input data from the connected devices and machines. The PLC then processes this input and computes whether any adjustments are required; if so, it signals these commands to the field devices. Traditionally, the range of field devices that could be controlled was limited, but recent developments in sensor technology have made specific components and resources much more measurable.

For example, if a water tank is almost at full capacity in a food processing plant, data from connected sensors can feed that information to a PLC. The PLC then sends the signal for the valve to close once the water volume exceeds a certain threshold, which prevents overflow. This is a simple control loop that sufficiently meets the need of the process.
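
The see-think-do cycle can be sketched in a few lines. The following is illustrative only, with simulated I/O standing in for the real sensor and valve:

```python
import random
import time

THRESHOLD = 0.95   # close the inlet valve above 95% of tank capacity

def read_level() -> float:
    """Stand-in for the analogue level input (0.0 = empty, 1.0 = full)."""
    return random.uniform(0.85, 1.0)

def set_valve(open_valve: bool) -> None:
    """Stand-in for the digital output that drives the inlet valve."""
    print("valve", "OPEN" if open_valve else "CLOSED")

for _ in range(10):                   # each pass mimics one PLC scan cycle
    level = read_level()              # see
    set_valve(level < THRESHOLD)      # think, then do
    time.sleep(0.1)
```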

Unfortunately, even as edge computing and PLC technology have advanced and offered more sophisticated data processing and control at the field level, many plant engineers continue to set up their devices in this way. In reality, modern edge devices and industrial PCs (IPCs) are capable of providing much greater control, as well as responding to external commands or variables that were previously beyond the scope of control systems.

The outer loop

While the idea of the Industrial IoT (IIoT) is predominantly a means of branding modern connectivity, the wider Industry 4.0 movement has brought with it some valuable advancements in edge and PLC technology. Among these advancements is the potential for on-premises automation and control systems to not only connect with local devices in an inner loop, but also to draw from external sources: an outer loop.

The outer loop can take several forms, depending on what is most applicable or relevant to a process or operation.

For example, some more digitally mature businesses might have outer loops that feature an enterprise resource planning (ERP) system, supply chain management software or a wider manufacturing execution system (MES). These systems will share and receive relevant information or send required adjustments — such as due to raw material intake or low stock — to an edge device, which feeds into the inner loop. This allows industrial businesses to make use of more comprehensive data analysis than can be achieved in local data systems.

Alternatively, an outer loop could draw from data sources that are completely external to a plant’s operations. For example, a wind farm operator could use an outer loop that drew from sources of meteorological data for wind forecasts. This could inform the optimum pitch and yaw of a turbine, controlled by a field device.

Another example, and one that will resonate with many industrial businesses, is energy price. The cost of power from the electrical grid fluctuates throughout the day, which might mean that on-site generation — such as solar panels or heat recovery processes — become more economical during times of peak grid demand. An outer loop can communicate this data efficiently to the relevant systems in a business, and changes can then be enacted that allow the business to reduce energy costs.
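
A hedged sketch of that decision logic (the price API endpoint and cost figures are hypothetical):

```python
import json
from urllib.request import urlopen

GRID_PRICE_URL = "https://api.example-energy.com/price/now"  # hypothetical

def grid_price_gbp_per_mwh() -> float:
    """Outer loop: fetch the current grid price from an external source."""
    with urlopen(GRID_PRICE_URL) as resp:
        return float(json.load(resp)["price"])

def choose_source(onsite_cost: float) -> str:
    """Switch to on-site generation when grid power is dearer."""
    return "onsite" if grid_price_gbp_per_mwh() > onsite_cost else "grid"

# e.g. choose_source(onsite_cost=85.0)   # £/MWh, illustrative figure
```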

Establishing secure connection

Clearly, there is a benefit for industrial businesses to establish both inner and outer loops. However, there is one barrier to deployment that most engineers encounter: hardware limitations.

Traditional PLCs were designed in a rather utilitarian manner to complete control functions effectively and in a straightforward way. This no-frills approach persists even with modern PLCs — even with today’s technical specifications, most PLCs are designed in a way that struggles to handle much more than a real-time operating system and some control applications.

Attempting to set up such a PLC to interact with an outer loop would either not work at all or severely hinder performance and risk failure.

Engineers can tackle this problem by introducing a separate gateway device that serves as an intermediary between the outer loop and the inner loop. However, this is a somewhat inelegant solution that requires investment in additional devices, which will require ongoing maintenance and introduce yet another device into already large system networks. Across an entire site, this quickly becomes costly and complicates network topologies.

A better solution is an unconventional one. It is possible to set up a modern automation controller in such a way that it breaks the conventions of PLCs, as long as the device is capable of multi-core processing at pace. From Novotek’s perspective, one of the best modern units that meet this need is Emerson Automation’s CPL410 automation controller.

The CPL410 can split inner and outer loop processing between its multiple processor cores. The inner loop and PLC processes can run from a single core, while another core — or even a group of cores, depending on complexity — can run more sophisticated containerised applications or operating systems. Additional cores can broker between the inner and outer loops, ensuring reliability and security.

A multi-core setup is useful because it allows the PLC processes and gateway to be consolidated into a single unit, without compromising performance capacity or speed. It also means that ageing or obsolete PLCs can be upgraded to a controller such as the CPL410 during any modernisation initiatives, minimising additional capital costs.
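
For a flavour of the core-splitting idea, the Linux sketch below pins separate processes to separate cores. It is a conceptual illustration only, not the CPL410’s actual configuration mechanism, which is handled by the controller’s own tooling:

```python
import os
from multiprocessing import Process

def inner_loop():
    os.sched_setaffinity(0, {0})       # pin the control task to core 0
    pass                               # deterministic scan cycle runs here

def outer_gateway():
    os.sched_setaffinity(0, {2, 3})    # broker/analytics on cores 2 and 3
    pass                               # containerised apps, external I/O

if __name__ == "__main__":
    for task in (inner_loop, outer_gateway):
        Process(target=task).start()
```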

Although the idea behind the IoT is not a new one for industrial businesses, the fact that other sectors are embracing the idea means more external data points than ever before are available. With systems in place that can support effective inner and outer loops, industrial businesses can leverage the increased connectivity of external markets and enhance their own operations.

Free whitepaper: Enhancing data management in utilities

Innovation has been one of the biggest focuses for utilities operators in recent years, particularly in the water market due to pressures from regulatory bodies. However, innovation is a broad term that offers no indication of the best and most impactful changes to implement.

The best approach may be to let the data dictate where to focus your innovation efforts. Or, if there’s a lack of useful data, then that itself may be the answer.

In this whitepaper, Novotek UK and Ireland explains how utilities operators can get to grips with data management to create an effective data-driven approach to innovation. Covering how to consolidate and modernise assets for data collection, how to make sense of utilities data and which method to use to get the most long-term value from data, the whitepaper is an invaluable resource for utilities operations managers and engineers.

Enabling inner and outer loops

Process control has been a staple of industrial environments for decades, making acute adjustment of variable processes — motor speeds, actuator positioning and much more — achievable. As the potential complexity of control algorithms increases, more variables can be considered in processes than ever before. Not all of these will be internal factors; some important signals can come from external sources. This is where establishing communication between inner and outer loops proves invaluable.

Industry has not been the same since the first programmable logic controller (PLC) was unveiled in 1968. Marking a move away from binary relay-based systems, the PLC made more complex computations possible to control connected machines. Since the 1970s, the use of PLCs has become common across all aspects of industry, with the underlying technology becoming more impressive as Moore’s Law accelerated processing capabilities each year.

PLCs becoming the standard also meant that most technicians and engineers became accustomed to how these systems should be deployed and programmed. It also gave rise to a mindset that exists to this day of pushing all control into PLC environments, because they contain all the logic needed to support fast, reliable and acute process control.

The prevailing setup for a PLC has typically been to see, think and do: signals from machines and devices show what is happening, the PLC’s logic computes what changes may need to occur and these adjustments are then enacted.

For example, if a water tank is nearly at full volume, sensor data feeds that information to a PLC. The PLC then sends the signal for the valve to close once a certain threshold of volume is met, avoiding overflow. It is a simple, inner control loop.

However, the digitalisation of the world around us has meant that there are now more external variables than ever before that can be accounted for to optimise operations. Let’s take, for example, the current price of power from the electrical grid. This is something that varies throughout a day depending on a number of factors. If an industrial business has its own on-site generation — whether from anaerobic digestion (AD) processes in the food industry or steam recovery in a process — then it may be cheaper to switch the source of energy if the grid price exceeds a certain threshold. At other times of the day, grid electricity might be more economical.

Although this is an external signal that may have been given lower priority in the past, the relative ease of integrating it into modern operations makes it worth considering. This is particularly true given the energy intensity of industrial operations; in the first quarter of 2020, industrial operations accounted for almost one-third (29.5 per cent) of the UK’s total energy consumption, using 23.1 terawatt-hours (TWh). Offsetting grid consumption during higher-cost hours where possible will of course reduce overall operating costs.

Barriers to outer loops

The problem is, accounting for external signals is not particularly easy or effective in traditional PLC setups. Because of how many PLCs have historically been deployed, this would require technicians to manually check the external source — in this case, cost of energy — and manually reprogram any PLCs responsible for controlling energy supply. This might need to be done several times per day for particularly dynamic energy markets.

It is far easier to make use of an algorithm in an outer loop that feeds external data into the control system. Communication with an outer loop allows for more than input from a single external reference point, though. It could be that there is some particularly complex analysis that you could run on asset-generated data, which would require a more powerful platform than a PLC. In this case, the outer loop might involve pushing data to this external platform, which could run the analysis and send back an adjustment or series of adjustments for the PLC to enact.

An example might be that a system can account for local device performance data from the inner loop and an accurate weather and temperature forecast from the outer loop. A sufficiently sophisticated external system can analyse these inputs and recommend a series of incremental changes for the PLC to make ahead of the highest or lowest temperature expected, allowing equipment to function effectively within temperature parameters. Effectively, the external loop can run more adaptive and powerful decisions for control.
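
A sketch of that outer-loop exchange (the analytics endpoint and payload shape are assumptions):

```python
import json
from urllib.request import Request, urlopen

ANALYTICS_URL = "https://analytics.example.com/recommend"   # hypothetical

def request_adjustments(telemetry: dict, forecast: dict) -> list:
    """Send local performance data plus the weather forecast to the
    external platform; receive a series of setpoint changes back."""
    payload = json.dumps({"telemetry": telemetry,
                          "forecast": forecast}).encode()
    req = Request(ANALYTICS_URL, data=payload,
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        # assumed response shape: [{"tag": ..., "value": ...}, ...]
        return json.load(resp)["adjustments"]
```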

Even with this loop established, there is a performance challenge in most PLCs. The traditional PLC is not designed to handle external sources well. Even though modern PLCs boast vastly superior technical specifications to those that came before, their operating systems (OSs) and the way they are designed to operate mean they are not ideal hosts for anything other than a real-time OS and some control programmes.

The solution to this is to use hardware as a gateway between the inner and outer loops. This doesn’t need to be an intrusive additional device to act as a bridge; it can be a controller designed in such a way that it breaks some of the conventional ‘rules’ applied to PLCs.

The prime example is Emerson Automation’s CPL410, which features multi-core processing that can be deployed to provide the inner loop control alongside secure communication with external devices. One core can run the standard PLC processes, while other cores can be used for other things. The second core or group of cores could be used to run application environments, such as Linux OS and containerised apps. Another cluster of cores could be used as a broker between the control system in a safe and secure way.

This physical device can bridge the two loops, maintaining integrity of the inner loop while supporting communication with the outer loop. The device allows a small data footprint to run complex logic and analytics, and because it is not burdening the core process control with that additional processing, the inner loop can still operate at high speed and with high fidelity.

Effectively enabling inner and outer loops requires engineers and technicians to change their mindset from the traditional way of deploying PLCs. It also shows the importance of hardware for data communication in the evolving edge, even though many people consider that to be primarily dependent on software and networks. It will take some time to break through the established ways of working that have taken hold since the PLC’s conception in 1968, but those that do will reap the benefits.

Free whitepaper: IoT ready by 2030

Many countries around the world are introducing initiatives aiming to achieve industrial digitalisation in the next 10–15 years. However, the technology is already available, and businesses can begin digitalising operations by 2030. The technologies available, and the value in using them, are outlined in this industrial internet of things (IIoT) whitepaper, which is free to download.
