Artificial Intelligence and Machine Learning: Enhancing Human Effort with Intelligent Systems

Artificial intelligence has come a long way since scientists first wondered if machines could think.

In the 20th century, the world became familiar with artificial intelligence (AI) as sci-fi robots that could think and act like humans. In 1950, British scientist and philosopher Alan Turing posed the question “Can machines think?” in his seminal work on computing machinery and intelligence, where he discussed creating machines that can think and make decisions the same way humans do (reference 1). Although Turing’s ideas set the stage for future AI research, they were ridiculed at the time. It took several decades and an immense amount of work from mathematicians and scientists to develop the field of artificial intelligence, which is formally defined as “the understanding that machines can interpret, mine, and learn from external data in a way that imitates human cognitive practices” (reference 2).

Even though scientists were becoming more accustomed to the idea of AI, data accessibility and expensive computing power hindered its growth. Only when these challenges were mitigated after several “AI winters” (with limited advances in the field) did the AI field experience exponential growth. There are now more than a dozen types of AI being advanced (figure). 

Areas within AI (reference 3)

Due to the surge in AI’s popularity in the 2010s, venture capital funding flooded into a large number of startups focused on machine learning (ML). This technology centers on algorithms that continuously learn to make decisions or identify patterns. For example, the YouTube algorithm may recommend less relevant videos at first, but over time it learns to recommend better-targeted videos based on the user’s previously watched videos.

The three main types of ML are supervised, unsupervised, and reinforcement learning. Supervised learning refers to an algorithm finding the relationship between a set of input variables and known labeled output variable(s), so it can make predictions about new input data. Unsupervised learning refers to the task of intelligently identifying patterns and categories from unlabeled data and organizing it in a way that makes it easier to discover insights. Lastly, reinforcement learning refers to intelligent agents that take actions in a defined environment based on a certain set of reward functions.
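To make the distinction concrete, the short sketch below (using scikit-learn and purely hypothetical data) fits a supervised classifier to labeled examples and then clusters the same inputs without labels; a reinforcement learning agent would instead interact with an environment, which is harder to show in a few lines.

    # Minimal sketch: supervised vs. unsupervised learning on hypothetical data
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 2))               # input variables
    y = (X[:, 0] + X[:, 1] > 0).astype(int)     # known, labeled outputs

    # Supervised: learn the input-to-label relationship, then predict on new inputs.
    clf = LogisticRegression().fit(X, y)
    print(clf.predict(rng.normal(size=(3, 2))))

    # Unsupervised: no labels; group similar points into clusters instead.
    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(clusters[:10])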

Deep learning, a subset of ML, saw numerous ground-breaking advances throughout the 2010s. Loosely modeled on the connections between nerve cells in the brain, neural networks consist of several thousand to a million hidden nodes and connections. Each node acts as a simple mathematical function; combined, these nodes can solve extremely complex problems like image classification, translation, and text generation.
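The sketch below illustrates the idea with plain NumPy and made-up numbers: each node computes a weighted sum of its inputs followed by a simple nonlinearity, and stacking layers of such nodes produces the kind of network described above. It is a toy illustration, not a production deep learning model.

    # Minimal sketch of a neural network's building blocks (hypothetical weights)
    import numpy as np

    def layer(x, W, b):
        # each node: weighted sum of inputs plus bias, passed through a ReLU nonlinearity
        return np.maximum(0, x @ W + b)

    rng = np.random.default_rng(0)
    x = rng.normal(size=(1, 4))                      # one input example with 4 features
    W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)    # hidden layer of 8 nodes
    W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)    # output layer of 3 nodes

    hidden = layer(x, W1, b1)
    scores = hidden @ W2 + b2                        # raw scores, e.g., for three image classes
    print(scores)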

Impact of artificial intelligence

Human lifestyles and productivity have improved drastically with advances in artificial intelligence. Health care, for example, has seen immense AI adoption in robotic surgery, vaccine development, genome sequencing, and more (reference 5). So far, adoption in manufacturing and agriculture has been slow, but these industries have immense untapped AI possibilities (reference 6). According to a recent article published by Deloitte, the manufacturing industry has high hopes for AI because the annual data generated in this industry is thought to be around 1,800 petabytes (reference 7).

This proliferation of data, if properly managed, essentially acts as a “fuel” that drives advanced analytical solutions, which can be used for the following (reference 8):

  • becoming more agile and disruptive by learning trends about customers and the industry ahead of competitors
  • saving costs through process automation
  • improving efficiency by identifying processes’ bottlenecks
  • enhancing customer experience by analyzing human behavior
  • making informed business decisions, such as targeted advertising and communication (reference 9).

Ultimately, AI and advanced analytics can augment human effort by taking on repetitive and sometimes even dangerous tasks, freeing people to focus on endeavors that drive high value. AI is not a far-fetched concept; it is already here, and it is having a substantial impact across a wide range of industries, including finance, national security, health care, criminal justice, transportation, and smart cities.

AI adoption has been steadily increasing. Companies reported 56 percent adoption in 2021, up 6 percentage points from 2020 (reference 10). As the technology becomes more mainstream, trends toward solutions that emphasize “explainability,” accessibility, data quality, and privacy are amplified.

“Explainability” drives trust. To keep up with the continuous demand for more accurate AI models, hard-to-explain (black-box) models are often used. The inability to explain these models makes it difficult to earn user trust and to pinpoint problems (bias, parameters, etc.), which can result in unreliable models that are difficult to scale. Because of these concerns, the industry is adopting more explainable artificial intelligence (XAI).

According to IBM, XAI is a set of processes and methods that allows human users to comprehend and trust the ML algorithm’s outputs (reference 11). Additionally, explainability can increase accountability and governance.
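One widely used explainability technique is permutation importance, which estimates how much a model’s accuracy depends on each input feature by shuffling that feature and measuring the drop in score. The sketch below, using scikit-learn on synthetic data, is a minimal illustration of the idea; dedicated XAI toolkits go much further.

    # Minimal sketch of permutation importance as an explainability technique
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=500, n_features=6, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)  # a "black box"
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

    # Larger values mean the model relies more heavily on that feature.
    for i, importance in enumerate(result.importances_mean):
        print(f"feature {i}: importance {importance:.3f}")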

Increasing AI accessibility. The “productization” of cloud computing for ML has taken the large compute resources and models, once reserved only for big tech companies, and put them in the hands of individual consumers and smaller organizations. This drastic shift in accessibility has fueled further innovation in the field. Now, consumers and enterprises of all sizes can reap the benefits of:

  • pretrained models (GPT-3, YOLO, CoCa [fine-tuned])
  • no-code/low-code model-building solutions (Azure ML Studio)
  • serverless architecture (hosting company manages the server upkeep)
  • instantly spinning up more memory or compute power when needed 
  • improved elasticity and scalability.

Data mindset shift. Historically, model-centric ML development, i.e., “keeping the data fixed and iterating over the model and its parameters to improve performances” (reference 12), has been the typical approach. Unfortunately, the performance of a model is only as good as the data used to train it. Although there is no scarcity of data, high-performing models require accurate, properly labeled, and representative datasets. This concept has shifted the mindset from model-centric development toward data-centric development—“when you systematically change or enhance your datasets to improve the performance of the model” (reference 12).
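The contrast between the two loops can be sketched in a few lines. In the hypothetical example below, the model-centric loop tunes hyperparameters while keeping noisy labels fixed, whereas the data-centric loop keeps the model fixed and improves the labels (here simulated by simply restoring the correct ones).

    # Minimal sketch: model-centric vs. data-centric iteration on hypothetical data
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 4))
    y_true = (X[:, 0] > 0).astype(int)
    noisy_labels = y_true.copy()
    flip = rng.random(300) < 0.2              # 20% of labels are wrong: a data-quality problem
    noisy_labels[flip] = 1 - noisy_labels[flip]

    # Model-centric loop: the data stays fixed while hyperparameters are tuned.
    for C in (0.01, 1.0, 100.0):
        score = cross_val_score(LogisticRegression(C=C), X, noisy_labels).mean()
        print(f"model-centric, C={C}: {score:.2f}")

    # Data-centric loop: the model stays fixed while the dataset is cleaned or relabeled.
    score = cross_val_score(LogisticRegression(), X, y_true).mean()
    print(f"data-centric, corrected labels: {score:.2f}")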

An example of how to improve data quality is to create descriptive labeling guidelines to mitigate recall bias when using data labeling companies like AWS’ Mechanical Turk. Additionally, responsible AI frameworks should be in place to ensure data governance, security and privacy, fairness, and inclusiveness.

Data privacy through federated learning. The importance of data privacy has not only forged the path to new laws (e.g., GDPR and CCPA), but also new technologies. Federated learning enables ML models to be trained using decentralized datasets without exchanging the training data. Personal data remains in local sites, reducing the possibility of personal data breaches.

Additionally, the raw data does not need to be transferred, which enables predictions in real time. For example, “Google uses federated learning to improve on-device machine learning models like ‘Hey Google’ in Google Assistant, which allows users to issue voice commands” (reference 13).
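The core of federated learning, federated averaging, can be sketched in a few lines: each simulated device trains on data that never leaves it and reports only model weights, which a central server averages into a new global model. The example below uses NumPy, a simple linear model, and synthetic local datasets purely for illustration.

    # Minimal sketch of federated averaging with hypothetical on-device data
    import numpy as np

    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])

    def local_update(w, n=200, lr=0.1, steps=20):
        # Each device keeps its data local and runs a few gradient steps.
        X = rng.normal(size=(n, 2))
        y = X @ true_w + rng.normal(scale=0.1, size=n)
        for _ in range(steps):
            grad = 2 * X.T @ (X @ w - y) / n
            w = w - lr * grad
        return w                               # only the updated weights leave the device

    global_w = np.zeros(2)
    for _ in range(5):                         # 5 communication rounds
        client_weights = [local_update(global_w) for _ in range(10)]   # 10 devices
        global_w = np.mean(client_weights, axis=0)                     # server averages weights
    print(global_w)   # approaches the true weights without centralizing any raw data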

AI in smart factories

Maintenance, demand forecasting, and quality control are processes that can be optimized through the use of artificial intelligence. To achieve these use cases, data is ingested from smart interconnected devices and/or systems such as SCADA, MES, ERP, QMS, and CMMS. This data is fed into machine learning algorithms in the cloud or at the edge to deliver actionable insights. According to IoT Analytics (reference 14), the top AI applications are:

  • predictive maintenance (22.2 percent)
  • quality inspection and assurance (19.7 percent)
  • manufacturing process optimization (13 percent)
  • supply chain optimization (11.5 percent)
  • AI-driven cybersecurity and privacy (6.6 percent)
  • automated physical security (6.5 percent)
  • resource optimization (4.8 percent)
  • autonomous resource exploration (3.8 percent)
  • automated data management (2.9 percent)
  • AI-driven research and development (2.1 percent)
  • smart assistant (1.6 percent)
  • other (5.2 percent).

Vision-based AI systems and robotics have helped develop automated inspection solutions for machines. These automated systems have not only been proven to save human lives but have also radically reduced inspection times. There have been significant examples where AI has outperformed humans, and many AI applications enable humans to make quick, informed decisions (reference 15).

Given the myriad additional AI applications in manufacturing, we cannot cover them all. But predictive maintenance is a good example to explore in more depth, because it has such a large effect on the industry.

Generally, maintenance follows one of four approaches: reactive, or fix what is broken; planned, or scheduled maintenance activities; proactive, or defect elimination to improve performance; and predictive, which uses advanced analytics and sensing data to predict machine reliability.

Predictive maintenance can help flag anomalies, anticipate remaining useful life, and recommend mitigations or maintenance (reference 17). Compared to the simple corrective or condition-based nature of the first three maintenance approaches, predictive maintenance is preventive and takes into account more complex, dynamic patterns. It can also adapt its predictions over time as the environment changes. Once accurate failure models are built, companies can construct optimization models to reduce costs and choose the best maintenance schedules based on production timelines, team bandwidth, replacement part availability, and other factors.
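As a minimal illustration of one building block, the sketch below uses scikit-learn’s IsolationForest to flag anomalous readings in hypothetical vibration and temperature data; a real predictive maintenance system would add historical failure records, remaining-useful-life models, and domain-specific features.

    # Minimal sketch: flagging anomalous machine readings for maintenance follow-up
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    normal = rng.normal(loc=[0.2, 60.0], scale=[0.05, 2.0], size=(500, 2))   # vibration, temperature
    faulty = rng.normal(loc=[0.6, 75.0], scale=[0.10, 3.0], size=(10, 2))    # a drifting machine
    readings = np.vstack([normal, faulty])

    model = IsolationForest(contamination=0.02, random_state=0).fit(normal)
    flags = model.predict(readings)          # -1 marks an anomaly, +1 normal operation

    print(f"{(flags == -1).sum()} readings flagged for maintenance follow-up")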

Bombardier, an aircraft manufacturer, has adopted AI techniques to predict the demand for its aircraft parts based on input features (e.g., flight activity) to optimize its inventory management (reference 18).

This example and others show how advances in AI depend on advances associated with other Industry 4.0 technologies, including cloud and edge computing, advanced sensing and data gathering, and wired and wireless networking.

(Courtesy of International Society of Automation)

About The Authors:

Ines Mechkane is the AI Technical committee chair of ISA’s SMIIoT Division. She is also a senior technical consultant with IBM. She has a background in petroleum engineering and international experience in artificial intelligence, product management, and project management. Passionate about making a difference through AI, Mechkane takes pride in her ability to bridge the gap between the technical and business worlds. 

Manav Mehra is a data scientist with the Intelligent Connected Operations team at IBM Canada focusing on researching and developing machine learning models. He has a master’s degree in mathematics and computer science from the University of Waterloo, Canada, where he worked on a novel AI-based time-series challenge to prevent people from drowning in swimming pools.

Adissa Laurent is AI delivery lead within LGS, an IBM company. Her team maintains AI solutions running in production. For many years, Laurent has been building AI solutions for the retail, transport, and banking industries. Her areas of expertise are time series prediction, computer vision, and MLOps.

Eric Ross is a senior technical product manager at ODAIA. After spending five years working internationally in the oil and gas industry, Ross completed his master of management in artificial intelligence. Ross then joined the life sciences industry to own the product development of a customer data platform infused with AI and BI. 

Top 10 Essential Elements of a Successful DCS Upgrade or Migration

Modernizing your distributed control system (DCS) requires justification, an understanding of available technologies, and a systems integrator who has a deep understanding of your industry. DCSNext is the proven methodology for successful DCS migrations that takes you step by step through the entire lifecycle of your plant.

(Picture Courtesy of MAVERICK: A Rockwell Automation Company)

This Top 10 list will help replace fear with facts as you consider an upgrade of your existing DCS, or a migration to a completely new platform.


1. Lifecycle Solution — A DCS upgrade or migration should deliver value throughout its entire lifecycle.
A successful migration project begins with strong front-end loading (FEL) that includes planning and budgeting with full lifecycle cost estimates. This best practice approach will help you implement your plan efficiently, locking in production gains through ongoing technical and operational services.

2. Solid Planning — Collaboration and planning are key to capturing process knowledge.
Involve your engineers, operators, IT personnel, and maintenance technicians in the FEL planning efforts. With everyone working together, you can build your project plan based on the FEL process, which is a best practice approach that uses a drill-down effect for efficiency. The resulting cutover plan – moving from the old to the new system – will minimize risk and maximize operational uptime.

3. Resource Availability — Choose a platform-independent automation solutions partner who understands your people, process, and technology.
You need a strong, qualified team, and your chosen third-party partner should be composed of migration project experts, able to sit down with you and bring their extensive experience and fresh ideas to the table. The right partner can help define your goals and be there at every stage. True collaboration ensures efficient communication and minimizes rework to keep you on time and on budget.

4. Funding — Develop a phased approach to spread the capital investment over a period of years.
Ask for budgetary estimates and total cost of operation (TCO) figures for different migration paths to get your funding approved and maximize ROI. Your trusted project partner will help you define the sequence of phases that best aligns with your facility’s needs and requirements. Strong upfront planning and realistic budgeting are best practices that lead to a successful migration project.

5. Buy-in — Form strong partnerships with key internal stakeholders.
It is critical to keep everyone actively participating and engaged in the project to ensure a strong sense of ownership going forward. Early buy-in from operators and maintenance technicians is critical because they will work with the new system every day. A team approach ensures a high level of success overall.

6. Objectivity — Remove bias from the decision process.
A new control system platform is a major investment and is critical to operations. It is extremely important to have objective and unbiased vendor comparisons when it comes to control system platform selection. Be sure to involve all key stakeholders in upfront discussions, vendor evaluations, and project planning. It’s the only effective way to consider all your options and make the best selection.

7. Leveraging the Legacy — Preserve and leverage the positives of your legacy system.
Your existing system has many elements worth saving that can be combined with the new technologies being implemented. Much of your intellectual property and process knowledge can be incorporated into the new platform. This can reduce development costs while adding all the operational and safety features of the new system.

8. System Integration — The new or improved DCS must connect on many levels.
An effective partner will help integrate your improved DCS with other third-party systems. Required for operating and managing the facility, these systems often get overlooked until later in a migration project, which can be costly. The same improvements in HMI graphics and alarms incorporated into the new DCS can be extended to the information coming through these interconnections, improving operational effectiveness.

9. Risk Mitigation — Define your risks upfront, then eliminate them.
A systematic analysis early in the planning process can identify potential risk areas upfront. You and your chosen third-party partner should consider safety, downtime, resource allocation, network traffic levels, data integrity, cybersecurity, and other critical factors while there is still the greatest flexibility to deal with them.

10. Need-based Solution — Don’t assume you know the best approach for an upgrade or migration before you’ve studied the problem or potential roadblocks.
Consider best practices from a variety of industries and all viable options. Most important, carefully choose a savvy partner able to use their experience and technical depth to help you sort through all those decisions. You’ll get a custom-fit solution based on your needs, not the needs of the technology.

(Courtesy of MAVERICK Technologies: A Rockwell Automation Company)

Pressure and Temperature Instrumentation Best Practices

When examining facilities and processes, users often see many opportunities for improved operation and efficiencies. They also might wonder how to better prepare their process for the next level of performance the future will inevitably require.

Fortunately, many locations are better prepared than they realize to seize opportunities for improvement, and much of the task of future-proofing is already done. The information many locations need could be hidden in plain sight. Stranded instrumentation data sits waiting to be used, connections among systems need only to be awakened, and data connectivity and analysis tools are within reach or simple to add.


Data is closer than you think

Begin by looking at temperature and pressure instruments for data that could lead to efficiencies and savings. For example, some advanced temperature transmitters (Figure 1) provide additional information apart from the process temperature variables. They contribute diagnostic alerts that can be included in an ecosystem of insight, providing immediate benefit as they alert maintenance teams if the device, the sensor, or even the process needs attention.

Figure 1: Some advanced temperature transmitters provide additional information apart from the process variables.

In addition, a particular advanced digital sensor offers more than a single process measurement. The sensor simultaneously measures pressure and differential pressure, both of which can be accessed if one looks beyond the limitations of the 4-20 mA signal. The sensor data is abundant enough for algorithms or artificial intelligence (AI) to detect adverse process conditions such as pump cavitation or plugged impulse lines.

Data also can be found in devices that have been untouched for a while. Some field instruments are so reliable that they become “set and forget.” In some cases, that is the most economical solution, but realize that such an approach could leave value on the table. These forgotten devices may be the superstars of digital transformation efforts.

After reviewing the data the current devices provide, think about how that data can be best used. When eventually adding instrumentation, first consider how robust and reliable the devices are so they continue to give value for a long time. It is also imperative to measure their sophistication—the data they gather—so information continues to be harvested for future opportunities.


Tap into stranded data

Tapping into data collected by devices is important to the goal of process improvements. Standards such as HART communications and FDT technology can simplify the job.

Communication standards. Communication protocols enable device interoperability, which means an array of devices from multiple suppliers can be used in a single facility often without the added expense of translating that data for each system using it. To help future-proof installations, look for devices that communicate using an open standard protocol (Figure 2). It is acceptable to have more than one standard in a given facility because many tools support multiple protocols. However, using too many different standards will place a greater burden on instrument and maintenance technicians.

Methods or mechanics. The mechanics required to tap into data range from straightforward to complex. Once the data is unlocked, the information can deliver benefits throughout the organization. Accessing stranded data is linked to one of three primary methods:

1. Native: When native communications are used, a simple license or software checkbox in the control system or data collector can begin the process of gathering information from field instruments. Though some system vendors charge an additional software license fee, this cost is usually trivial compared to the ease of data access it provides.

Figure 2: Communication standards simplify device data integration.

2. Software: Data sharing sometimes requires users to add a software plugin to the system. These may be provided by the automation supplier or developed by a third-party software company. Older systems may require a custom solution, while newer systems rely more on standard protocols. FDT technology, for example, is an open standard for the integration of industrial automation networks and devices. Device suppliers provide a device type manager (DTM), which functions as a device driver to interpret information from the device.

3. Hardware: The third way to access data involves incorporating hardware such as gateways, multiplexers, or other edge devices. While the additional hardware and maintenance add cost, this approach offers added benefits. Dedicated hardware channels may provide faster communications than a native host system channel, which must prioritize control and safety data over asset performance and health.

Standards reduce data overload. Once the gates have been opened, a wealth of information is made available. Sometimes the amount of existing data can be overwhelming. Look to NAMUR NE107 for assistance. NE107 structures data into categories based on severity and simplifies how data is delivered (Figure 3), helping users understand an issue’s severity and prepare appropriate responses.

Figure 3: Four categories from NE107 that simplify data organization.

An asset management system will be able to display alarms in the appropriate category, enabling technicians to recognize the severity of device alerts and take the necessary corrective actions. Thanks to innovation, many pressure and temperature devices (some possibly already in place) sort the data into NE107 categories: check function, maintenance required, out of spec, and failure. When used correctly, this information can improve operations and maintenance by helping technicians prioritize troubleshooting and repair work.
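A minimal sketch of how such a mapping might look in software is shown below; the alert names and the routing rules are hypothetical, but the four categories are the NE107 categories listed above.

    # Minimal sketch: routing hypothetical device alerts into the four NE107 categories
    from enum import Enum

    class NE107(Enum):
        CHECK_FUNCTION = "check function"
        MAINTENANCE_REQUIRED = "maintenance required"
        OUT_OF_SPEC = "out of specification"
        FAILURE = "failure"

    # Hypothetical mapping from raw device alerts to NE107 categories.
    ALERT_CATEGORY = {
        "sensor_drift_detected": NE107.MAINTENANCE_REQUIRED,
        "ambient_temp_above_rating": NE107.OUT_OF_SPEC,
        "electronics_fault": NE107.FAILURE,
        "local_config_in_progress": NE107.CHECK_FUNCTION,
    }

    def prioritize(alerts):
        # Failures first, then out-of-spec, then maintenance required, then check function.
        order = [NE107.FAILURE, NE107.OUT_OF_SPEC, NE107.MAINTENANCE_REQUIRED, NE107.CHECK_FUNCTION]
        return sorted(alerts, key=lambda a: order.index(ALERT_CATEGORY[a]))

    print(prioritize(["sensor_drift_detected", "electronics_fault", "ambient_temp_above_rating"]))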


Put data to work

Once the data and the connections are set, plans can be created to put everything to work. Consider the facility’s improvement goals and what data could move the team toward them. For example, digital diagnostics help technicians troubleshoot existing issues. Predictive warnings provide early indications of impending failures so maintenance can be planned before excessive losses occur. This type of data can reduce maintenance costs while improving plant availability and safety.

As data is being put into action, additional instrumentation and maintenance insights can help teams move from preventive to predictive maintenance. This means that device alerts will automatically report when a device needs attention. Technicians will no longer need to be sent out on monthly routes just to check device health.

For example, to promote continued reliable process operation and accuracy, users of certain advanced temperature sensors can set threshold and frequency limits to trigger an alarm status. This alarm can be used to estimate the remaining life of the sensor, allowing users to plan when maintenance would be most effective.

If using certain advanced multivariable sensors, users can put data to work in detecting line blockages. The fluctuations of the differential pressure, static pressure, capsule temperature, and ambient temperature signals are continuously monitored. Statistical calculations and comparison to reference conditions can reveal an impulse line blockage.
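A simplified sketch of that statistical idea follows: because a plugged line damps the pressure fluctuations reaching the sensor, comparing the current signal’s variability against a healthy reference can raise a flag. The signals and the 30 percent threshold below are hypothetical.

    # Minimal sketch: impulse line blockage check by comparing signal variability to a reference
    import numpy as np

    rng = np.random.default_rng(0)
    reference_dp = rng.normal(loc=50.0, scale=0.8, size=2000)   # healthy differential pressure, kPa
    current_dp = rng.normal(loc=50.0, scale=0.1, size=2000)     # suspiciously "quiet" signal

    reference_std = reference_dp.std()
    current_std = current_dp.std()

    # Hypothetical rule: flag when fluctuation drops below 30% of the reference level.
    if current_std < 0.3 * reference_std:
        print(f"Possible impulse line blockage: std {current_std:.2f} vs reference {reference_std:.2f}")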

Raw data from these devices also is more valuable than ever. In the past, case studies and failure analysis would guide software teams to develop algorithms allowing smart pressure and temperature transmitters to detect a specific issue or failure mode. Now artificial intelligence and machine learning (AI/ML) can replace these specific algorithms and begin detecting a wider range of potential conditions that may affect the process.

Note that users can choose from a wide variety of tools, from basic to high-end, that can help gather and analyze data. Open, enterprise-level asset management software for both automation and production assets can contribute to improving the quality of maintenance plans and optimizing maintenance costs throughout the plant lifecycle.

Imagine the changes, and plan the route

As users discover the data and tools they already have (or make a few adjustments to obtain them), they must think about the opportunities that await. As they plan their actions, they must look for robust instrumentation and tools that deliver performance today and continue to perform well into the future. Data is only accessible from a device that continues to operate reliably and communicates with the right protocols.

(Courtesy of ISA Automation and Author Nicholas Meyer)