What SCADA Evolution Means for Developers https://ideashub.novotek.com/what-scada-evolution-means-for-developers/ Fri, 28 Oct 2022 13:58:37 +0000

If you’ve walked through factories and seen operator or supervisor screens like the one below, you’re actually seeing both the best and worst aspects of technology evolution! Clearly, no data is left hidden within the machine or process, but screen design looks to have been driven by the ability to visualise what’s available from the underlying controls, rather than a more nuanced view of how to support different people in their work. You could say that the adoption of modern design approaches to building a “good” HMI or SCADA application has lagged what the underlying tools can support.

One place to configure and manage SCADA, Historian and Visualisation

In Proficy iFIX, GE Digital has incorporated a mix of development acceleration and design philosophies that can both lead to more effective user experiences with a deployed system and lower the overall cost of building, maintaining and adapting a SCADA.

Three critical elements stand out:

1. Model-centric design

This brings object-oriented development principles to SCADA and related applications. With a “home” for standard definitions of common assets, and their related descriptive and attribute data, OT teams can create reusable application components that are quick to deploy for each physical instance of a type. The model also provides useful application foundations, so things like animations, alarm filters and so on can be defined as appropriate for a class or type – and therefore easily rolled out into the screens where instances of each type are present. And with developments on the GE side making the model infrastructure available to Historian, analytics and MES solutions, work done once can defray the cost and effort needed in related programs.
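As a rough illustration of the idea (not GE’s actual object model or API, just a minimal sketch in Python), an asset type is defined once, with its attributes and alarm behaviour, and then stamped out for each physical instance:

```python
from dataclasses import dataclass, field

@dataclass
class AssetType:
    """Reusable definition for a class of equipment - defined once."""
    name: str
    attributes: list        # descriptive and attribute data every instance carries
    alarm_filter: str       # e.g. the alarm group templated screens should show

@dataclass
class AssetInstance:
    """A physical asset bound to its type; screens resolve behaviour from the type."""
    tag_prefix: str
    asset_type: AssetType
    properties: dict = field(default_factory=dict)

# Define the type once...
pump_type = AssetType(
    name="CentrifugalPump",
    attributes=["Speed", "DischargePressure", "RunStatus"],
    alarm_filter="PUMP_ALARMS",
)

# ...then deploy it for every physical instance of that type.
pumps = [
    AssetInstance("LINE1_P101", pump_type, {"location": "Line 1"}),
    AssetInstance("LINE2_P205", pump_type, {"location": "Line 2"}),
]

for pump in pumps:
    # A templated screen only needs the type to know what to animate and which alarms to filter.
    print(pump.tag_prefix, pump.asset_type.attributes, pump.asset_type.alarm_filter)
```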

2. Centralised, web-based administration and development

In combination with the modelling capability, this offers a big gain in productivity for teams managing multiple instances of SCADA. With common object definitions and standard screen templates, the speed at which new capabilities or changes to existing footprints can be built, tested and rolled out means a huge recovery of time for skilled personnel.

3. The subtle side of web-based clients

Many older applications have large bases of custom scripting – in many cases to enable interaction with data sources outside the SCADA, drive non-standard animations, or enable conditional logic. With the shift to web-based client technology, the mechanics for such functions are shifting to more configurable object behaviours, and to server-side functions for data integrations. These mean simpler, more maintainable and less error-prone deployments.

Taking advantage of what current-generation iFIX offers will mean a different development approach – considering a useful asset and object model structure, then templating the way objects should be deployed, is a new starting point for many. But with that groundwork laid, the speed to a final solution is, in many (most!) cases, faster than with older methodologies – and that’s before considering the advantage of reusability across asset types, or across multiple servers for different lines or sites.

Recovered time buys room for other changes

With rich automation data mapped to the model, and faster methods to build and roll out screens, different users can have their views tailored to suit their regular work. Our earlier screen example reflected a common belief that screen design is time-consuming, so it is best to put as much data as possible in one place so that operators, technicians, maintenance and even improvement teams can all get what they need without excessive development effort. But that can mean a confused mashup of items that get in the way of managing the core process, and in turn actually hamper investigations when things are going wrong.

But where development time is less of a constraint, more streamlined views can be deployed to support core work processes, with increasing levels of detail exposed on other screens for more technical investigation or troubleshooting. Even without fully adopting GE Digital’s Efficient HMI design guidelines, firms can expect faster and more effective responses from operators and supervisors who don’t have to sift through complex, overloaded views simply to maintain steady-state operations.

With significant gains to be had in terms of operator responsiveness, and effective management of expectations, the user experience itself can merit as much consideration as the under-the-bonnet changes that benefit developers.

Greenfield vs. Brownfield

It may seem like adopting a model-based approach, and taking first steps with the new development environments, would be easier on a fresh new project, whereas an upgrade scenario should be addressed by “simply” porting forward old screens, the database, etc. But when you consider all that can be involved in that forward migration, the mix of things that need “just a few tweaks” can mean as much – or more – work than a fresh build of the system, where the old serves as a point of reference for design and user requirements.

The process database is usually the easiest part of the configuration to migrate forward. Even if changing from legacy drivers to IGS or Kepware, these tend to be pretty quick. Most of the trade-offs of time/budget for an overall better solution are related to screen (and related scripting) upgrades. From many (many!) upgrades we’ve observed our customers make, we see common areas where a “modernisation” rather than a migration can actually be more cost effective, as well as leaving users with a more satisfying solution.

There are several questions to consider when weighing a straight migration against a modernisation.

While there is often concern about whether modernisation can be “too much” change, it’s equally true that operators genuinely want to support their companies in getting better. So if what they see at the end of an investment looks and feels the same way it always has, the chance to enable improvements may have been lost – and with it a chance to engage and energise employees who want to be a part of making things better.

Old vs. New

iFIX 2023 and the broader Proficy suite incorporate more modern tools, which in turn offer choices about methods and approaches. Beyond the technical enablement, engineering and IT teams may find that exploring these ideas offers benefits in areas ranging from modernising systems to avoid obsolescence risk, to making tangible progress on IoT and broader digital initiatives.

https://ideashub.novotek.com/3290-2/ Mon, 24 Oct 2022 09:46:04 +0000

One of the advantages of managing technology assets is that you can do things with them beyond “just running them”, such as keeping track of them and repairing them! Optimising a production process for efficiency or utility usage is often a matter of enhancing the code in a control program, SCADA, or related system, so the tech assets themselves can be the foundation for ongoing gains. Similarly, as customer or regulatory requirements for proof of security or insight into production processes increase, the tech assets again become the vehicle to satisfy new demands, rather than re-engineering the underlying mechanical or process equipment.

It’s this very adaptability that makes version control around the configurations and programs valuable. As configurations and programs change, being sure that the correct versions are running is key to sustaining the improvements that have been built into those latest releases. With that in mind, a good technology asset management program, such as octoplant, will have version control as a central concern.

Whether deploying solutions in this area for the first time, or refreshing an established set of practices, it’s worthwhile to step back and evaluate what you want version control to do for you – operationally, compliance-wise and so on. And from that, the capabilities needed from any tools deployed will become clearer. With that in mind, we’ve noted some of the key areas to consider, and the decisions that can come from them. We hope this helps you set the stage for a successful project!

Decide How Deeply to Embed Version Control

We take VPNs, remote access and web applications for granted in a lot of ways – but this combination of technology means that it’s easier than ever to incorporate external development and engineering teams into your asset management and version control schemes. Evaluate whether it makes sense to set up external parties as users of your systems, or if it makes more sense to have your personnel manage the release and return of program/configuration files. The former approach can be most efficient in terms of project work, but it may mean some coordination with IT to ensure access is granted securely. Either way, setting your version control system to reflect when a program is under development by others can ensure you have a smooth process for reincorporating their work back into your operation.

Be Flexible About the Scope of What Should be Version-Controlled.

Program source code and configurations are the default focus of solutions like octoplant. Yet we see many firms deploying version control around supporting technical documentation, diagrams, even SOP (Standard Operating Procedure) documents relating to how things like code troubleshooting and changes should be done.

Define Your Storage and Navigation Philosophy. 

In many cases, this can be a very easy decision – set up a model (and associated file storage structure) that reflects your enterprise’s physical reality, as illustrated below. This works especially well when deploying automated backup and compare-to-master regimens, as each individual asset is reflected in the model.

However, some types of business may find alternatives useful. If you have many instances of an asset where the code base is genuinely identical between assets, changes are rolled out en masse, and automated backup and compare is not to be deployed, it can make sense to think of a category-based or asset-type-specific model and storage scheme.

It may be that a blended approach makes sense – where non-critical assets and programs may have variance both in their automation, and therefore in the program structure, an enterprise model can make sense. But in some industries (food, pharma, CPG), it can be common to maintain identical core asset types, and associated automation and process control. So having some category/type-based managed versions can be useful, too.
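To make the two philosophies concrete, here is a minimal sketch; the names and folder conventions are invented for illustration and are not octoplant’s own structure:

```python
# Enterprise-model storage: the path mirrors the physical hierarchy (site/area/line/asset),
# so automated backup and compare-to-master jobs map one-to-one onto real assets.
enterprise_paths = [
    "AcmeFoods/LeedsPlant/Packing/Line1/Filler_PLC",
    "AcmeFoods/LeedsPlant/Packing/Line2/Filler_PLC",
]

# Category/type-based storage: one managed version per asset type, rolled out en masse
# to the identical assets that run it.
type_based_paths = [
    "AssetTypes/Filler_PLC/gold_master_v12",
    "AssetTypes/CasePacker_PLC/gold_master_v7",
]

# A blended scheme simply uses whichever convention suits each asset class.
for path in enterprise_paths + type_based_paths:
    print(path)
```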

Reporting and Dashboards – Version Control Data is Not Just for Developers and Engineers.

A robust solution will track actions taken by different users in relation to different assets’ code bases, and in relation to any automated comparisons. This means you can have a rich audit trail that can certainly be used to ensure disciplines are being followed, but it also means that you can easily support any regulatory or customer requirements for data. And with a model of your operation reflecting the different makes, models, variants and generations of tech assets, you’ll have a tech inventory at your fingertips that can make reinvestment and replacement planning much more efficient. So, make sure your plan to share dashboards and reports reflects the different people in your organisation who could use their own view of the tech assets, and the programs running in them.

If you’d like to learn more about the work we do with our customers on technology asset management, you can get in touch here; or ring us on +44 113 531 2400

Managing multiple energy sources https://ideashub.novotek.com/managing-multiple-energy-sources/ Tue, 18 Oct 2022 12:51:20 +0000

In 2013, the UK Government Office for Science produced a report entitled The future role for energy in manufacturing. In it, they identified two threats to UK-based manufacturers. The first was that the price of energy in the UK would rise compared to the cost faced by competitor firms abroad, placing UK manufacturers at a significant disadvantage. Well, the price has risen – but globally, because of the Russia–Ukraine war. Nevertheless, the threat to UK manufacturing is still valid. The second threat was that a low-carbon electricity supply would be unreliable, and that the cost of power cuts would rise. Well, that is certainly true if you rely solely on low-carbon electricity. But using multiple sources of power can be greatly beneficial.

In 2021, US rankings put technology companies at the top of their list of renewables users. Google derives 93% of its total electricity consumption from solar and wind power. Microsoft sourced 100% of its electricity from wind, small hydro and solar power, while Intel also derived 100% of its electricity from various renewables.

In the manufacturing world, more and more producers are turning to multiple sources to power their manufacturing, particularly those in energy-intensive production industries.

Tesla is well known for committing to renewable energy in manufacturing, with its solar-panelled roofs and use of waste heat and cold desert air to govern production processes in its Gigafactories.

Some of the bigger names in the manufacturing world that are utilising solar systems include GM, L’Oreal and Johnson & Johnson.

Manufacturing companies make ideal spots for solar system installations for several reasons. First, these businesses typically operate out of large plants with sizeable roofs. These expansive, flat spaces are perfect for setting up many solar panels. Also, manufacturing plants tend to be located in industrial parks and other areas far away from tall buildings, so they avoid the problems caused by massive structures looming over solar panels and creating shade. And smaller manufacturers can also benefit from multiple energy sources to both reduce their costs and reliance on the grid.

Making it work

A setup that combines various types of energy is called a multi-carrier energy system, and it increases energy efficiency. Synchronising two or more independent three-phase or single-phase power systems can be achieved using a Power Sync and Measurement (PSM) system, such as the module found in the PACSystems RX3i Power Sync & Measurement System (IC694PSM001 & IC694ACC200). This will monitor two independent three-phase power grids. It incorporates advanced digital signal processor (DSP) technology to continuously process three voltage inputs and four current inputs for each grid.

Measurements include RMS voltages, RMS currents, RMS power, frequency, and phase relationship between the phase voltages of both grids.

The PSM module performs calculations on each captured waveform, with the DSP processing the data in less than two-thirds of a power line cycle. The PSM module can be used with wye or delta type three-phase power systems or with single-phase power systems.
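To give a feel for the quantities involved, here is a minimal numerical sketch in Python (not the PSM module’s firmware) that computes RMS voltage and the phase relationship between two sampled grid waveforms, then applies a simple phase-window check of the kind an ANSI 25 sync-check permissive relies on. The sample rate, amplitudes and window are invented for illustration:

```python
import numpy as np

FS = 10_000        # sample rate in Hz (illustrative only)
F_NOMINAL = 50.0   # nominal line frequency in Hz

def rms(samples: np.ndarray) -> float:
    """Root-mean-square value of a sampled waveform."""
    return float(np.sqrt(np.mean(samples ** 2)))

def phase_deg(samples: np.ndarray, freq: float, fs: float) -> float:
    """Phase of the fundamental component, in degrees, via correlation with sin/cos."""
    t = np.arange(len(samples)) / fs
    in_phase = np.sum(samples * np.sin(2 * np.pi * freq * t))
    quadrature = np.sum(samples * np.cos(2 * np.pi * freq * t))
    return float(np.degrees(np.arctan2(quadrature, in_phase)))

# Two simulated grid voltages at the same frequency, 12 degrees apart (~230 V RMS).
t = np.arange(int(FS / F_NOMINAL)) / FS          # exactly one cycle of samples
grid1 = 325 * np.sin(2 * np.pi * F_NOMINAL * t)
grid2 = 325 * np.sin(2 * np.pi * F_NOMINAL * t + np.radians(12))

delta = phase_deg(grid2, F_NOMINAL, FS) - phase_deg(grid1, F_NOMINAL, FS)
print(f"Grid 1: {rms(grid1):.1f} V RMS, Grid 2: {rms(grid2):.1f} V RMS")
print(f"Phase difference: {delta:.1f} degrees")

# A sync-check permissive only allows breaker closure when the angle is inside a window.
PHASE_WINDOW_DEG = 20.0
print("Sync permissive:", abs(delta) <= PHASE_WINDOW_DEG)
```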


The PSM system can be used for applications such as:

  • Electrical power consumption monitoring and reporting
  • Fault monitoring
  • Generator control features for generator to power grid synchronization
  • Demand penalty cost reduction/load shedding

The PSM system consists of:

  • PSM module – A standard IC694 module that mounts in an RX3i main rack. The PSM module provides the DSP capability.
  • Terminal Assembly – A panel-mounted unit that provides the interface between the PSM module and the input transformers.
  • Interface cables – Provide the GRID 1 and GRID 2 connections between the PSM module and the Terminal Assembly

The image below shows how a basic PSM system can be connected.

PSM System Features
  • Uses standard, user-supplied current transformers (CTs) and potential transformers (PTs) as its input devices.
  • Accurately measures RMS voltage and current, power, power factor, frequency, energy, and total three-phase 15-minute power demand.
  • Provides two isolated relays that close when the voltage phase relationships between the two monitored grids are within the specified ANSI 25 limits provided by the RX3i host controller. These contacts can be used for general-purpose, lamp duty or pilot duty loads. Voltage and current ratings for these load types are provided in GFK-2749, PACSystems RX3i Power Sync and Measurement System User’s Manual.
  • Provides a cable monitoring function that indicates when the cables linking the PSM module and Terminal Assembly are correctly installed.
  • PSM module and Terminal Assembly are easily calibrated by hardware configuration using the PAC Machine Edition (PME) software.

To find out how Novotek can help you reduce your energy consumption and manage multiple energy sources, email us at info_uk@novotek.com.

DataOps: The Fuel Injectors For Your Transformation Engine? https://ideashub.novotek.com/dataops-the-fuel-injectors-your-transformation-engine/ Thu, 19 May 2022 11:43:48 +0000

Data – everyone agrees it’s the fuel for the fires of innovation and optimisation. The industrial world is blessed with an abundance of rich, objective (being machine-generated) data, so it should be well-equipped to seek new advantages from it. Too often, the first efforts an industrial firm takes to harness its machine and process data for new reporting or advanced analysis initiatives involve simple use cases and outputs that can mask what it takes to support a mix of different needs in a scalable and supportable way. Data Ops practices provide a way of systematically addressing the steps needed to ensure that your data can be made available in the right places, at the right times, in the right formats for all the initiatives you’re pursuing.


Industrial data (or OT data) poses particular challenges that your Data Ops strategy will address:

  • It can be generated at a pace that challenges traditional enterprise (or even cloud-layer) data collection and data management systems – to say nothing of the ingestion and processing costs typical of cloud platforms when reporting and analysis are considered.
  • The data for functionally identical assets or processes is often not generated in a consistent structure and schema.
  • OT data generally does not have context established around each data point – making it difficult to understand what it represents, let alone the meaning inherent in the actual values!
  • Connecting to a mix of asset types with different automation types and communications protocols is often necessary to get a complete data set relevant to the reporting or analytics you’re pursuing.
  • A wide array of uses demands different levels of granularity of some data points and a breadth of collection points that is significantly wider than many individual stakeholders may appreciate.

These are the reasons why in many firms, the engineering team often ends up becoming the “data extract/Excel team” – their familiarity with the underlying technology means they can take snapshots and do the data cleansing necessary to make the data useful. But that’s not scalable, and data massaging is a far less impactful use of their time – they should be engaged with the broader team interpreting and acting on the data!


Data Ops – Quick Definition

There’s no one way to “do” Data Ops. In the industrial world, it’s best thought of as a process involving:

  • Determining the preferred structures and descriptions (models) for OT data, so it may serve the uses the organisation has determined will be valuable.
  • Assessing what approaches to adding such models can be adopted by your organisation.
  • Choosing the mix of tools needed to add model structures to a combination of existing and new data sources.
  • Establishing the procedure to ensure that model definitions don’t become “stale” if business needs change.
  • Establishing the procedures to ensure that new or changing data sources are brought into the model-based framework promptly.


A Rough Map is Better Than No Map.

Take a first pass at capturing all the intended uses of your OT data. What KPIs, what reports, what integration points, and what analytics are people looking for? Flesh out those user interests with an understanding of what can feed into them:

  1. Map the different stakeholders’ data needs in terms of how much they come from common sources, and how many needs represent aggregations, calculations or other manipulations of the same raw data.
  2. Flesh out the map by defining the regularity with which data needs to flow to suit the different use cases. Are some uses based on by-shift or daily views of some data? Are other uses based on feeding data in real time between systems to trigger events or actions?
  3. Now consider what data could usefully be “wrapped around” raw OT data to make it easier for the meaning and context of that data to be available for all. Assess what value can come from:
    1. Common descriptive models for assets and processes – a “Form Fill & Seal Machine” with variables like “Speed” and “Web Width” (etc.) is a far easier construct for many people to work with than a database presenting a collection of rows reflecting machines’ logical addresses, each with a small library of cryptically structured variables associated with it (see the sketch after this list).
    2. An enterprise model to help understand the locations and uses of assets and processes. The ISA-95 standard offers some useful guidance in establishing such a model.
    3. Additional reference data to flesh out the descriptive and enterprise models (e.g. the make and model of common asset types with many vendors, or information about a location such as latitude or elevation). Be guided by what kind of additional data would be helpful in comparing/contrasting/investigating differences in outcomes that need to be addressed.
  4. Now assess what data pools are accumulating already – and how much context is accumulating in those pools. Can you re-use existing investments to support these new efforts, rather than creating a parallel set of solutions?
  5. Finally, inventory the OT in use where potentially useful data is generated, but not captured or stored; particularly note connectivity options.
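Here is the sketch promised above: a minimal illustration in Python of what a descriptive model wrapped around raw OT data can look like. The tag addresses, variable names and schema are invented for illustration and are not any particular product’s format:

```python
# Raw OT data as it often arrives: logical addresses with cryptic variable names.
raw_points = {
    "PLC7.DB12.W4": 412.0,   # actually machine speed, packs/min
    "PLC7.DB12.W6": 320.0,   # actually web width, mm
}

# Descriptive model: a named asset type with human-readable variables and units,
# plus enterprise context in an ISA-95-style hierarchy and some reference data.
ffs_model = {
    "asset_type": "Form Fill & Seal Machine",
    "variables": {
        "Speed":    {"source": "PLC7.DB12.W4", "unit": "packs/min"},
        "WebWidth": {"source": "PLC7.DB12.W6", "unit": "mm"},
    },
    "context": {
        "enterprise": "Acme", "site": "Leeds", "area": "Packing", "line": "Line 1",
        "make": "ExampleCo", "model": "FFS-200",
    },
}

# A consumer now works with meaningful names instead of raw addresses.
for name, meta in ffs_model["variables"].items():
    print(f"{name}: {raw_points[meta['source']]} {meta['unit']}")
```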

Avoiding A Common Trap

“Data for Analytics” means different things at different stages. A data scientist looking to extract new insights from OT data may need very large data sets in the data centre or cloud, where they can apply machine learning or other “big data” tools to a problem. A process optimisation team deploying a real-time analytic engine to make minute-by-minute use of the outputs of the data scientists’ work may only need small samples across a subset of data points for their part of the work. Data Ops thinking will help you ensure that both of these needs are met appropriately.


Map’s Done – Now How About Going Somewhere?

The work that comes next is really the “Ops” part of Data Ops – with the rough map of different uses of OT data at hand, and the view of whether each use needs granular data, aggregated data, calculated derivations (like KPIs), or some kind of combination, you’ll be able to quickly determine where generating desired outputs requires new data pools or streams, or where existing ones can be used. And for both, your data modelling work will guide what structures and descriptive data need to be incorporated.

At this point, you may find that some existing data pools lend themselves to having asset and descriptive models wrapped around the raw data at the data store level – i.e. centrally. It’s a capability offered in data platform solutions like GE’s Proficy Historian. This approach can make more sense than extracting data sets simply to add model data and then re-writing the results to a fresh data store. Typically, streaming/real-time sources offer more choice in how best to handle adding the model around the raw data – and there are solutions, like HighByte’s Intelligence Hub, that allow the model information to be added at the “edge” – the point where the data is captured in the first place. With the model definitions included at this point, you can set up multiple output streams – some feeding more in-the-moment views or integration points, some feeding data stores. In both cases, the model data having been imposed at the edge makes it easier for the ultimate user of the data to understand the context and the meaning of what’s in the stream.
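To make the “model at the edge” idea concrete (an illustrative sketch, not HighByte’s or GE’s actual configuration), an edge node might wrap each raw reading with its model context once, then fan the same payload out to more than one destination:

```python
import json
import time

def contextualise(raw_value: float, variable: str, asset: str, line: str) -> dict:
    """Wrap a raw reading with model context at the point of collection."""
    return {
        "asset": asset,
        "line": line,
        "variable": variable,
        "value": raw_value,
        "timestamp": time.time(),
    }

def publish(stream: str, payload: dict) -> None:
    """Stand-in for an MQTT publish, REST call or historian write."""
    print(stream, json.dumps(payload))

reading = contextualise(412.0, "Speed", "FFS-01", "Line 1")

# The same contextualised payload feeds both a real-time integration stream and a
# long-term store - each consumer sees meaning and context, not a bare number.
publish("realtime/line1", reading)
publish("historian/ingest", reading)
```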



Edge Tools vs Central

Realistically, you’re likely to need both, and the driving factor will not necessarily be technical.

Edge works better when:
  1. You have a team that deals well with spreading standardised templates.
  2. Data sources are subject to less frequent change (utility assets are a good example of this).
  3. The use cases require relatively straightforward “wrapping” of raw data with model information.

Central works well when:
  1. The skills and disciplines to manage templates across many edge data collection footprints are scarce.
  2. The mix of ultimate uses of the data is more complex – requiring more calculations, derivations or modelling of relationships between different types of data sources.
  3. Change in underlying data sources is frequent enough that some level of dedicated and/or systematised change detection and remediation is needed.


Regardless of which tools are applied, the model definitions established earlier, applied consistently, ensure that different reports, calculations and integration tools can be developed more easily, and adapted more easily as individual data sources under the models are inevitably tweaked, upgraded or replaced – as new automation or sensors come in, their unique data structures simply need to be bound to the models representing them, and the “consumers” of their outputs will continue to work. So, while tools will be needed, ultimately the most valuable part of “doing” Data Ops is the thinking that goes into deciding what needs to be wrapped around raw data for it to become the fuel for your digital journey.

1,000 miles or around the block: Start one step at a time… https://ideashub.novotek.com/1000-miles-or-around-the-block-start-one-step-at-a-time/ Wed, 16 Mar 2022 11:44:08 +0000

The rise of connected industrial technologies and Industry 4.0 has prompted the development and launch of countless systems with extensive capabilities and functions. This is often beneficial for businesses with a defined and set long-term strategy, but it can force early adoption and spending when deployments and licensing outstrip a company’s capacity to change work processes and adopt new tech.


Here, Sean Robinson, software solutions manager at Novotek UK and Ireland, explains how less can be more with new plant tech deployments – and why immediate problem-solving needs to be a distinct effort within longer-term strategies.


Countless proverbs, maxims and quotes have been formed around the idea of moderation, dating back as far as Ancient Greek society – or even further. The notion remains important to this day for everything from diet to technology. However, engineers and plant managers frequently over-indulge in the latter and over-specify systems that offer functionality well beyond what is necessary or even practically useful.

It can initially appear that there is no harm in opting for an automation or plant IT system that has extensive functionality, because this may help to solve future problems as they arise. That being said, an investment positioned to be all-encompassing – like a full, material-receiving-through-WIP-execution-with-performance-analysis-and-enterprise-integration manufacturing execution system (MES) – can sometimes present its own barriers to adoption for certain businesses, especially those in sectors that favour flexibility, such as fast-moving consumer goods (FMCG) or food manufacturing (and also – interestingly – increasingly the contract production side of consumer health and life sciences). Where core production processes and related enabling technology are well established, it can be risky, expensive and overkill to treat the need to implement specific new capabilities as the trigger for wholesale replacement or re-working. The key is to identify where critical new functional needs can be implemented around the installed technology base in focused ways that deliver results, while leaving open the option of incrementally adding additional functionally-focused solutions in a staged way, over time.

At Novotek, our role is to help our customers choose technology that delivers on an immediate need, while opening up the potential to build incrementally in a controlled, low-risk way.

Fortunately, both the licensing models and the technical architectures of plant IT solutions are changing in ways that support this kind of approach. So, the software cost and deployment services costs of bringing on board very specific capabilities can be scaled to match the user base, and the technical and functional boundaries of a specific need. We can think of these focused deployments as “micro-apps”. A key part of this approach is that the apps aren’t built as bespoke, or as an extension of a legacy (and possibly obsolete) system. It’s a productised solution – with only the “right” parts enabled and delivered to the right stakeholders. Consider quality in toiletry production, and specifically challenges with product loss due to variability in the quality of raw materials. It’s safe to assume that a plant will already have local control systems in place elsewhere to track the overall quality outcomes, but monitoring the raw material quality is often left to supplier-side data that may be under-used – serving as a record of supplier compliance with a standard, rather than being used to proactively trigger adjustments in key process settings to avoid losses. In this scenario, an ideal micro-app could be focused on capturing raw material data, using machine learning to provide deep analysis of how existing machines can best process the material lot, and alerting supervisors and process owners to take action. Such a function might have a small number of users; it might even have integration with inventory or quality systems replacing some manual data entry. So, the software licensing, services and timelines to deliver impact can be kept small. When we consider some of the demands manufacturers now face, on fronts ranging from qualifying new suppliers/materials, to furthering energy and water reduction, to adopting more predictive maintenance and asset management strategies, we see a lot of potential to tackle these with focused solutions that happen to borrow from the underlying depth and breadth of MES solutions.
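A heavily simplified sketch of such a micro-app is shown below. The thresholds, the “model” and the alerting hook are all invented for illustration; a real deployment would use a trained model and the site’s own process limits:

```python
from typing import Optional

def suggest_adjustment(lot: dict) -> Optional[dict]:
    """Toy stand-in for a learned model: map raw-material viscosity to a filler speed setpoint."""
    viscosity = lot["viscosity_cp"]
    if viscosity <= 1200:
        return None                      # material within normal range, no action needed
    # Out-of-range material: slow the filler and flag the lot for review.
    return {"filler_speed_pct": 85, "reason": f"high viscosity ({viscosity} cP)"}

def alert(recipient: str, message: str) -> None:
    """Stand-in for an email or notification integration."""
    print(f"ALERT to {recipient}: {message}")

# Incoming lot data, e.g. from supplier documentation or a LIMS integration.
incoming_lot = {"lot_id": "RM-2291", "viscosity_cp": 1350}

action = suggest_adjustment(incoming_lot)
if action:
    alert(
        "line1.supervisor",
        f"Lot {incoming_lot['lot_id']}: {action['reason']}; "
        f"suggest filler speed {action['filler_speed_pct']}%",
    )
```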

There are unquestionably many cases where a plant-wide solution like an MES is necessary or even preferable. We and our key technology and services partners have delivered many such “complete” systems across the country. However, it should certainly not be considered the only option for agile industrial businesses. If each factory can be thought of as a collection of work processes/functions that need to be delivered, then implementing the supporting/enabling technology as a collection of micro-apps can make sense. And when balancing risk, cost and speed to value, sometimes, moderation in plant technology deployments can provide the most bountiful benefits.

Are your PLCs an easy target? A mindset shift can significantly reduce PLC firmware vulnerabilities https://ideashub.novotek.com/are-your-plcs-an-easy-target-reduce-plc-firmware-vulnerabilities/ Thu, 25 Nov 2021 14:06:48 +0000

Since the beginning of the COVID-19 pandemic, businesses across the UK have faced a surge in cybercrime. In fact, research indicates that UK businesses experienced one attempted cyberattack every 46 seconds on average in 2020. Industrial businesses are a prime target for hackers and the ramifications of a data breach or denial-of-service attack are far-reaching, making system security imperative. Here, David Evanson, corporate vendor relationship manager at Novotek UK and Ireland, explains how industrial businesses can keep their vital systems secure.

For many business leaders and engineers, it is still tempting to consider large multinational companies or data-rich digital service providers to be the prime targets for hackers. However, the growing volume of cyberattacks on businesses globally shows that any company can be a target of malicious attacks on systems and services.

According to research by internet service provider Beaming, there were 686,961 attempted system breaches among UK businesses in 2020, marking a 20 per cent increase on 2019. Of these attacks, Beaming noted that one in ten intended to gain control of an Internet of Things (IoT) device — something that indicates a tendency to target system continuity rather than conventional data.

Both factors together are cause for alarm among industrial businesses of all sizes. Hackers are targeting all manner of companies, from start-ups to global organisations, and focussing more on the growing number of internet-connected devices and systems that were previously isolated.

The consequences of a device being compromised range from data extraction to service shutdown, and in any case the financial and production impacts to an industrial business are significant. There is no single quick fix to bolster cybersecurity due to the varying types of hacks that can take place. Some cyberattacks are complex and sophisticated; others less so. Many attacks on devices tend to fall into the latter category, which means there are some steps industrial businesses can take to minimise risk.

Novotek has been working closely with industrial businesses in the UK and Ireland for decades. One common thing that we have observed with automation hardware and software is that many engineers do not regularly upgrade software or firmware. Instead, there is a tendency to view automation as a one-off, fit-and-forget purchase. The hardware may be physically maintained on a regular schedule, but the invisible software aspect is often neglected.

GE Fanuc Series 90-30

Older firmware is more susceptible to hacks because it often contains unpatched known security vulnerabilities, such as weak authentication algorithms, obsolete encryption technologies or backdoors for unauthorised access. For a programmable logic controller (PLC), older firmware versions make it possible for cyber attackers to change the module state to halt-mode, resulting in a denial-of-service that stops production or prevents critical processes from running.

PLC manufacturers routinely update firmware to ensure it is robust and secure in the face of the changing cyber landscape, but there is not always a set interval between these updates.

In some cases, updates are released in the days or weeks following the discovery of a vulnerability — either by the manufacturer, Whitehat hackers or genuine attackers — to minimise end-user risk. The firmware version’s upgrade information should outline any exploits that have been fixed.
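One practical discipline this points to is keeping an inventory of installed firmware and checking it against the minimum versions you have approved after reviewing release notes. The sketch below is illustrative only; the device names, inventory format and version numbers are invented:

```python
# Installed PLC firmware, e.g. exported from an asset or version management tool.
installed = {
    "LINE1_PLC": "9.40",
    "LINE2_PLC": "8.95",
    "BOILER_PLC": "10.05",
}

# Minimum firmware versions approved after reviewing vendor release notes.
approved_minimum = {
    "LINE1_PLC": "9.40",
    "LINE2_PLC": "9.20",
    "BOILER_PLC": "10.05",
}

def version_tuple(version: str) -> tuple:
    """Turn '9.40' into (9, 40) so versions compare numerically, not alphabetically."""
    return tuple(int(part) for part in version.split("."))

for plc, current in installed.items():
    if version_tuple(current) < version_tuple(approved_minimum[plc]):
        print(f"{plc}: firmware {current} is below approved minimum "
              f"{approved_minimum[plc]} - schedule an update")
```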

However, it’s important to note that legacy PLCs may no longer receive firmware updates from the manufacturer if the system has reached obsolescence. Many engineers opt to air-gap older PLCs to minimise the cybersecurity risk, but the lack of firmware support can also create interoperability issues with connected devices. Another part of the network, such as a switch, receiving an update can cause communications and compatibility issues with PLCs running on older versions — yet another reason why systems should run on the most recent software patches.

At this stage, engineers should invest in a more modern PLC to minimise risk — and, due to the rate of advancement of PLCs in recent years, likely benefit from greater functionality at the same time.

Firmware vulnerabilities are unavoidable, regardless of the quality of the PLC. At Novotek, we give extensive support for the Emerson PACSystems products that we provide to businesses in the UK and Ireland. This involves not only support with firmware updates as they become available, but also guidance on wider system resilience to ensure that businesses are as safe as possible from hardware vulnerabilities. The growth in cyberattacks will continue long beyond the end of the COVID-19 pandemic, and infrastructure and automation are increasingly becoming targets. It may seem a simple step, but taking the same upgrade approach to firmware that we do with conventional computers can help engineers to secure their operations and keep running systems safely.

Bridging the connectivity gap https://ideashub.novotek.com/bridging-the-connectivity-gap/ Mon, 06 Sep 2021 10:18:03 +0000

In the age of connectivity, there is no shortage of useful information that engineers can leverage to optimise and improve operations. Everything from the speed of motors to the weather forecast can influence production. However, bringing these data sources together in a secure way is a challenge faced by many engineers. Here, George Walker, managing director of Novotek UK and Ireland, explains how engineers can bridge the gap between local process data and external data sources.

The Internet of Things (IoT) may still be a relatively new concept for many consumers and professional service businesses, but the idea of machine-to-machine communication and connectivity is nothing new for industry. In fact, it’s been more than 50 years since the programmable logic controller (PLC) first became popular among industrial businesses as a means of controlling connected systems.

The principle behind the PLC is quite simple: see, think and do. The controller will ‘see’ what is happening in a process based on the input data from the connected devices and machines. The PLC then processes this input and computes whether any adjustments are required; if so, it signals these commands to the field devices. Traditionally, the field devices that could be controlled were limited, but recent developments in sensor technology have made specific components and resources much more measurable.

For example, if a water tank is almost at full capacity in a food processing plant, data from connected sensors can feed that information to a PLC. The PLC then sends the signal for the valve to close once the water volume exceeds a certain threshold, which prevents overflow. This is a simple control loop that sufficiently meets the need of the process.
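In pseudocode terms, one scan of that loop looks like the deliberately simplified Python sketch below (not real ladder logic or function block code; the limits and I/O stand-ins are invented):

```python
TANK_HIGH_LIMIT = 0.95   # fraction of capacity at which the fill valve must close
TANK_LOW_LIMIT = 0.80    # fraction of capacity at which it may reopen

def read_level_sensor() -> float:
    """Stand-in for the analogue input from the tank level transmitter."""
    return 0.96

def set_fill_valve(open_valve: bool) -> None:
    """Stand-in for the digital output driving the fill valve."""
    print("Fill valve", "OPEN" if open_valve else "CLOSED")

# One scan of the see-think-do cycle; a PLC repeats this continuously.
level = read_level_sensor()        # see
if level >= TANK_HIGH_LIMIT:       # think
    set_fill_valve(False)          # do: prevent overflow
elif level <= TANK_LOW_LIMIT:
    set_fill_valve(True)
```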

Unfortunately, even as edge computing and PLC technology have advanced and offered more sophisticated data processing and control at the field level, many plant engineers continue to set up their devices in this way. In reality, modern edge devices and industrial PCs (IPCs) are capable of providing much greater control, as well as responding to external commands or variables that were previously beyond the scope of control systems.

The outer loop

While the idea of the Industrial IoT (IIoT) is predominantly a means of branding modern connectivity, the wider Industry 4.0 movement has brought with it some valuable advancements in edge and PLC technology. Among these advancements is the potential for on-premises automation and control systems to not only connect with local devices in an inner loop, but to draw from external sources: an outer loop.

The outer loop can take several forms, depending on what is most applicable or relevant to a process or operation.

For example, some more digitally mature businesses might have outer loops that feature an enterprise resource planning (ERP) system, supply chain management software or a wider manufacturing execution system (MES). These systems will share and receive relevant information or send required adjustments — such as due to raw material intake or low stock — to an edge device, which feeds into the inner loop. This allows industrial businesses to make use of more comprehensive data analysis than can be achieved in local data systems.

Alternatively, an outer loop could draw from data sources that are completely external to a plant’s operations. For example, a wind farm operator could use an outer loop that drew from sources of meteorological data for wind forecasts. This could inform the optimum pitch and yaw of a turbine, controlled by a field device.

Another example, and one that will resonate with many industrial businesses, is energy price. The cost of power from the electrical grid fluctuates throughout the day, which might mean that on-site generation — such as solar panels or heat recovery processes — become more economical during times of peak grid demand. An outer loop can communicate this data efficiently to the relevant systems in a business, and changes can then be enacted that allow the business to reduce energy costs.
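As a hedged sketch of that outer-loop logic (the tariff feed, generation cost and decision rule are invented for illustration):

```python
def fetch_grid_price() -> float:
    """Stand-in for a call to a market or supplier tariff feed (pence per kWh)."""
    return 42.5

def onsite_generation_cost() -> float:
    """Estimated marginal cost of running on-site generation (pence per kWh)."""
    return 31.0

def command_preferred_source(source: str) -> None:
    """Stand-in for the setpoint handed down to the inner control loop."""
    print("Preferred supply:", source)

# Outer loop: compare external price data against local generation cost, then pass
# a simple decision to the on-premises control system to act on.
if fetch_grid_price() > onsite_generation_cost():
    command_preferred_source("onsite")
else:
    command_preferred_source("grid")
```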

Establishing secure connection

Clearly, there is a benefit for industrial businesses to establish both inner and outer loops. However, there is one barrier to deployment that most engineers encounter: hardware limitations.

Traditional PLCs were designed in a rather utilitarian way, to complete control functions effectively and in a straightforward manner. This no-frills approach persists even with modern PLCs — even with today’s technical specifications, most PLCs are not designed to handle much more than a real-time operating system and some control applications.

Attempting to set up such a PLC to interact with an outer loop would either not work at all or severely hinder performance and risk failure.

Engineers can tackle this problem by introducing a separate gateway device that serves as an intermediary between the outer loop and the inner loop. However, this is a somewhat inelegant solution that requires investment in additional devices, which will require ongoing maintenance and introduce yet another device into already large system networks. Across an entire site, this quickly becomes costly and complicates network topologies.

A better solution is an unconventional one. It is possible to set up a modern automation controller in such a way that it breaks the conventions of PLCs, as long as the device is capable of multi-core processing at pace. From Novotek’s perspective, one of the best modern units that meet this need is Emerson Automation’s CPL410 automation controller.

The CPL410 can split inner and outer loop processing between its multiple processor cores. The inner loop and PLC processes can run from a single core, while another core — or even a group of cores, depending on complexity — can run more sophisticated containerised applications or operating systems. Additional cores can broker between the inner and outer loops, ensuring reliability and security.

A multi-core setup is useful because it allows the PLC processes and gateway to be consolidating into a single unit, without compromising performance capacity or speed. It also means that ageing or obsolete PLCs can be upgraded to a controller such as the CPL410 during any modernisation initiatives, minimising additional capital costs.
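Conceptually, the pattern looks something like the sketch below. This is not how the CPL410 is actually programmed; it simply illustrates, with Python processes standing in for cores, a deterministic inner loop and an outer-loop gateway running separately with a broker in between:

```python
import multiprocessing as mp
import queue
import time

def inner_loop(broker) -> None:
    """Deterministic control loop: never blocks waiting on the outer world."""
    setpoint = 50.0
    for _ in range(5):                      # a handful of scans for the demo
        try:
            setpoint = broker.get_nowait()  # accept a new setpoint if one has arrived
        except queue.Empty:
            pass
        print("inner loop: controlling to setpoint", setpoint)
        time.sleep(0.1)

def outer_loop(broker) -> None:
    """Gateway side: talks to external systems and posts results to the broker."""
    time.sleep(0.25)                        # pretend to fetch from an ERP or forecast API
    broker.put(55.0)

if __name__ == "__main__":
    broker = mp.Queue()                     # the broker between the two loops
    workers = [mp.Process(target=inner_loop, args=(broker,)),
               mp.Process(target=outer_loop, args=(broker,))]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```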

Although the idea behind the IoT is not a new one for industrial businesses, the fact that other sectors are embracing the idea means more external data points than ever before are available. With systems in place that can support effective inner and outer loops, industrial businesses can leverage the increased connectivity of external markets and enhance their own operations.

Free whitepaper: Enhancing data management in utilities https://ideashub.novotek.com/free-whitepaper-enhancing-data-management-in-utilities/ Fri, 20 Aug 2021 10:30:00 +0000

Innovation has been one of the biggest focuses for utilities operators in recent years, particularly in the water market due to pressures from regulatory bodies. However, innovation is a broad term that offers no indication of the best and most impactful changes to implement.

The best approach may be to let the data dictate where to focus your innovation efforts. Or, if there’s a lack of useful data, then that itself may be the answer.

In this whitepaper, Novotek UK and Ireland explains how utilities operators can get to grips with data management to create an effective data-driven approach to innovation. Covering how to consolidate and modernise assets for data collection, how to make sense of utilities data and which method to use to get the most long-term value from data, the whitepaper is an invaluable resource for utilities operations managers and engineers.

Complete the form below to receive a copy of the whitepaper.

Can your IPC handle the heat? https://ideashub.novotek.com/can-your-ipc-handle-the-heat/ Mon, 05 Jul 2021 10:55:00 +0000

High operating temperatures are abundant in the industrial sector, whether it’s the elevated ambient temperature of oil and gas refining or the continuous operation of heavy machinery with reduced airflow. These high temperatures pose a common problem for the performance of industrial electronics, particularly industrial PCs (IPCs). Here, David Evanson, corporate vendor relationship manager at Novotek UK and Ireland, explains how engineers and managers can ensure their IPCs can handle the heat.

It’s no secret that IPCs play an essential role in modern industrial operations. These vital units undertake a range of tasks, from managing equipment performance data to motion and automated system control. It’s therefore no surprise that the IPC market continues to go from strength to strength. In fact, ResearchAndMarkets forecasts that the global IPC market will grow at a compound annual growth rate (CAGR) of 6.45 per cent, to be valued at $7.756 billion USD by 2026.

IPCs feature prominently on the factory floor, generally either in control cabinets or mounted onto machinery. Being on the frontline means that engineers and plant managers know that, as a minimum, they need to specify ruggedised IPCs for their operations. What sometimes gets overlooked, however, is the operating temperature range of an IPC unit.

Electronic circuits and computing components are highly susceptible to extreme temperatures, be they high or low. At high temperatures, components can deteriorate faster. In the case of IPCs, modern CPUs are designed to prevent accelerated component deterioration by throttling their processing performance. This succeeds in reducing the heat produced in processing circuits, but it means that processes running on the IPC run slowly or become unresponsive — not ideal for real-time control applications.

In certain markets, considering operating temperature range is second nature for engineers. For example, an IPC tasked with controlling or collecting data from a welding robot will be specified to withstand high temperatures.

However, temperature should be a consideration even in unassuming industrial environments. If an IPC is situated outside, the exposure to sunlight — alongside reduced airflow in an installation — can cause an increase in temperature that can reach up to 70 degrees Celsius. Both Novotek and its partner Emerson Automation have encountered industrial businesses that have experienced this problem.

Of course, the solution to the challenge of overheating in IPCs is to specify a unit that boasts good thermal performance in an extended operating temperature. Unfortunately, not all IPCs that claim to offer this feature are actually effectively tested in conditions that accurately reflect real-world operating conditions, which can lead to some IPCs failing when deployed in the field.

The reason why extended temperature IPCs fail is due to the way that the testing is undertaken. In many cases, the IPC is tested in a thermal chamber that has significant forced air flow conditions, which reduces the efficacy and the accuracy of the test itself. A more effective way of testing is for the IPC manufacturer to block the airflow, which simulates a realistic use condition in a cabinet environment.

Emerson Automation conducts its tests under these restricted airflow conditions, which allows it to accurately demonstrate that its IPCs can perform at high temperatures without throttling processing performance. The company has even shared a video of its IPC thermal testing process, highlighting the capabilities of its RXi2-BP.

It’s for this reason that Emerson’s IPCs are the go-to option from Novotek’s perspective, because they ensure reliable and consistent operation in demanding environmental conditions.

With IPCs playing such a vital role in modern industry, it’s important that they are up to the task not only in terms of computing capacity, but also environmental performance. When plant managers and engineers can specify an IPC with assurances of the accuracy of thermal testing, it provides peace of mind that the unit can handle the heat.

What evolving edge means to remote stations https://ideashub.novotek.com/what-evolving-edge-means-to-remote-stations/ Tue, 16 Feb 2021 14:59:00 +0000

The utilities sector relies on remote assets and stations, whether they are pumping stations that keep water circulating in a network or electrical substations responsible for transforming power ready for supply into homes and offices. The difficulty for utilities operators has traditionally been managing these remote stations in an efficient and cost-effective way. Fortunately, developments in edge systems can help operators overcome this challenge.

Asset management is integral to utilities businesses. Assets must remain operational to ensure that customers receive a satisfactory and uninterrupted service, and regulatory bodies apply ever-increasing pressure on operators to maintain good qualities of service. Regulators often push operators to not only provide continuous supply to customers, but to do so efficiently while keeping costs controlled.

For example, in the water sector, the UK regulator Ofwat routinely publishes a price review that outlines a revised framework for operators. PR19, the latest review that came into effect on April 1, 2021, set operators the goal of reducing their bills by 12 per cent by 2025. This would amount to a £50 reduction in the average household annual water bill. Alongside this, the regulator is pushing operators to embrace innovation to improve their performances, with incentives and funding in place to encourage this.

It’s because of these pressures that the evolving edge is proving increasingly important for operators. The conventional approach to maintaining remote stations and assets would involve sending technicians to them, which produces labour costs and is time intensive. Introducing edge devices and control systems to these stations allows operators to remotely measure, monitor and control the performance of assets with increasing detail and accuracy.

With modern edge control systems in place, it’s possible to automate several of the process adjustments that keep remote stations functional. These systems, along with the collection and storage of performance data, make it possible to manage and maintain remote assets in an efficient, strategic way.

However, the introduction of edge systems in utilities is not without its own challenges. One of the core ideas behind edge deployments is that asset and equipment data can be collected and pushed to an analytical platform, often a cloud-based system accessed by network managers elsewhere. Maintenance can then be managed more intelligently.

The sheer volume of data produced in these stations, which is pushed to cloud servers in real-time, leads to very high cloud storage costs due to the message-based charging structure of many cloud providers. Across an entire network of thousands of assets, the costs for operators are exceptionally high — making it harder to increase edge deployment while simultaneously reducing costs.

To overcome the issue of elevated costs from edge deployments, network managers need to use modern edge technology to open up options. If an edge system has the capacity to accumulate raw data, this makes it easier to send data to cloud storage in a larger, single instance.

Alternatively, more advanced edge systems — such as a Historian system — can aggregate data from several sources and perform some level of compression or analysis before sending to cloud storage. Similarly, it could be that some raw data does not need to be sent to the cloud, so the ability of a system to parse data at the edge can reduce cloud requirements.
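The difference is easiest to see in a small sketch (the message format, values and summary statistics are illustrative): rather than sending one cloud message per raw sample, the edge device summarises an interval and sends a single payload.

```python
import json
import statistics

# Raw samples accumulated at the edge over one reporting interval (e.g. flow readings).
interval_samples = [12.1, 12.4, 11.9, 12.8, 13.0, 12.2]

# Naive approach: one cloud message per sample -> one billable message per reading.
per_sample_messages = [json.dumps({"flow": sample}) for sample in interval_samples]

# Edge aggregation: a single message per interval carrying the summary the analytics need.
aggregate_message = json.dumps({
    "count": len(interval_samples),
    "min": min(interval_samples),
    "max": max(interval_samples),
    "mean": round(statistics.mean(interval_samples), 2),
})

print(f"{len(per_sample_messages)} messages versus 1 message: {aggregate_message}")
```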

Both options allow operators to get the full benefit of edge technologies in remote stations, without high cloud costs. Not only can modern edge control systems be accessed remotely by technicians or be set to automatically make certain process adjustments, but the data collection capabilities of edge computing can be managed in a cost-effective way.

As edge technology continues to evolve and offer greater computing capacity, operators will be able to manage remote stations and assets with increasing efficacy and efficiency, cost-effectively. With the utility service quality expectations from customers and regulators increasing year on year, the evolving edge offers an ideal solution to long-standing challenges.
