As the head of Global Commercial Operations for Uptake, Ajay Madwesh created a five-part video series on thought leadership. This is the fourth installment of the thought leadership series discussing the Application of AI/ML in Operational Environments.

Read the video transcript for your convenience:

Often, you see systems integrators and even clients think of just throwing data scientists at a problem. These data scientists may have statistical backgrounds, customer behavior backgrounds, etc. They have a set of tools. They know Python and Scala, along with a number of other scripting and programming languages. But the nature of the data science needed to build AI/ML models in the operational space is very different.

For example, consider machine learning-based anomaly detection.

You’re thinking about millions of data points coming in, and simple clustering or segmentation analysis is not sufficient because the underlying data sets have a property called state. Each of these plants may operate in different states at different points in time.

Those data sets coming in embody that change in state. You must extract that state before applying machine learning to that environment.
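The idea of extracting state first can be sketched with a toy example. This is a hypothetical illustration, not Fusion's actual pipeline: readings are grouped by operating state, and each value is scored against the statistics of its own state rather than a single global baseline.

```python
from collections import defaultdict
from statistics import mean, stdev

def flag_anomalies(readings, threshold=3.0):
    """Flag readings that are anomalous *within their operating state*.

    readings: list of (state, value) tuples. A global z-score would miss
    anomalies, because each state has its own normal operating band: a value
    that is routine at high load can be a serious fault at idle.
    """
    by_state = defaultdict(list)
    for state, value in readings:
        by_state[state].append(value)

    # Per-state baselines (need at least two points to estimate spread)
    stats = {s: (mean(v), stdev(v)) for s, v in by_state.items() if len(v) > 1}

    anomalies = []
    for i, (state, value) in enumerate(readings):
        mu, sigma = stats.get(state, (None, None))
        if sigma and abs(value - mu) > threshold * sigma:
            anomalies.append(i)
    return anomalies
```

With this framing, a pressure of 80 recorded while the plant is idle gets flagged, while the same 80 under high load does not: the state provides the context the raw number lacks.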

Fusion provides that data and context. The context allows the AI/ML models, or any front-end filtering, to extract features that are state-oriented and features that are not state-oriented. This is an advantage to clients who want to build advanced models on the operational side of the world. That’s an important position for us at Fusion to take, and it’s also an important position for clients to understand. That’s one of the many things we explain when describing why Fusion fits into their environment much better.

As the head of Global Commercial Operations for Uptake, Ajay Madwesh created a five-part video and blog series on thought leadership. This is the third installment of the series discussing how OT/IT Integration is not just about Data and Technology.

Read the video transcript below for your own convenience:

It’s often mistakenly assumed that OT/IT integration is just about organizing data into databases. But, in reality, OT (operational technology) is not just about data.

IoT

People talk about IoT (Internet of Things), but what are the “things” they talk about there? The “things” are things like compressors, turbines, or valves. These “things” are not the same as my cell phone, which is also an IoT device. So if the market generally talks about OT/IT integration as something in which you put all the OT data and the IT data in one place, such as a data lake, do you achieve OT/IT integration?

OT/IT integration

In reality, those data sets don’t mesh together that easily. Until you completely understand the nature of the OT data itself, it is very hard to merge that data into the IT data sets.

One of the things that Fusion does is bring the context of these data sets into the cloud. That allows for a meaningful merger of the data because it retains the actual nature of the site, factory, plant, etc., in the data itself. This allows for a better meshing of the data from IT systems into the Fusion data, which ultimately leads to a much better understanding of the operations along with the IT data.

So our position at Fusion is that it’s not just about data and technology but rather about understanding what OT is and contextualizing that OT data.

As the head of Global Commercial Operations for Uptake, Ajay Madwesh has created a five-part video series on thought leadership. This is the second installment of the thought leadership series discussing the topic of Sustainability Integrated Operation.

Read the video transcript below for your own convenience:

Today, we think of sustainability as something that you have to do. In the future, sustainability will be an integral part of everything that you do. This means that the data sets needed to achieve sustainability are not something you manage as a separate function but rather as an integral function.

Fusion allows you to bring together the data needed for better sustainability analysis. Fusion helps you know where energy consumption is highest and identify methods to reduce energy consumption and emissions.

For example, imagine yourself as a gas turbine manufacturer or operator. How do you minimize fuel consumption? How do you operate the gas turbine in its sweet spot, producing the most energy with the least natural gas consumption?

All this presupposes that the time series data, the operational data, is an integral part of those analyses.

Our belief at Fusion is that today sustainability is being treated as something that you have to do on the side. But, over time – within the next three to five years, sustainability will become an integral means to operate, whether it’s a plant, factory, etc. – that is the future we see. Fusion allows us to help clients who want to make sustainability an underlying principle of everything they do.

As the head of Global Commercial Operations for Uptake, Ajay Madwesh created a five-part video series on thought leadership. This is the first installment of the series discussing Industry X.0 and Large Scale Cloud Compute and Analytics.

Read the video transcript below for your own convenience:

What is Industry X.0?

Consulting companies generally refer to the transformation that’s happening on the operational side as Industry X.0. It used to be known as Industry 4.0, but as time passes, it continues to change and is now known as Industry X.0.

What that really means is the ability for all the factories and sites to reorganize themselves. Think of a discrete manufacturing industry where the demand is very, very volatile. What they manufacture today will not be what they manufacture tomorrow. The demand patterns change quite rapidly. The realignment of all of that is Industry X.0.

Industry X.0 has led to the need to have a very strong understanding of changing the product lines to meet the demand – aligning the product lines to the demand quickly.

Industry X.0 and Fusion

Fusion is a fantastic solution because it is the single source. Even if the systems are distributed, Fusion can still be the single sourcing agent to bring the data into the cloud.

There is a voluminous need for data in the cloud to be able to do things like: Supply Chain Management, Demand Management, and Aligning Demand to Supply. These functions are a part of the domain of Industry X.0. This is where Fusion comes into play. Fusion is a fantastic single-source solution to solve that problem for our clients.

As the head of the process products portfolio at Uptake, I will provide some background on a use case that Oil & Gas E&P companies may be interested in. I will discuss how those companies leverage Fusion to streamline and support their activities with data.

In the Oil & Gas industry, operating companies are looking to modernize completion and field production activities, such as monitoring fracking activities with well site data. The required data comes from internal operational technology systems and third-party suppliers.

Well Fracking Monitoring with Fusion Data Hub

The Challenge

The combination of this data provides complete visibility of what is happening at the intervening and neighboring wells that are often operated by other operating companies. Visibility is required due to the risk associated with unexpected changes in pressure or other relevant hydraulic fracturing responses. These can lead to environmental, safety, and productivity impacts.

During a fracking operation, data from the operating companies’ time-series historians and SCADA/PLC systems, such as wellhead controls, is required to understand how the wells are performing. Because drilling equipment is used during fracking, the onboard drilling systems and sensors (which collect things like depth of penetration, pressure, and flows as they frac) provide better visibility of what is happening in the well. In addition, a third-party surface pressure monitoring service supplier may be contracted to establish monitoring and sensing capabilities in the operator’s wells and neighboring wells from other operating companies. All this information needs to be recorded to serve as proof of due diligence and to comply with regulatory requirements such as AER Directive 083 and Enform’s IRP 24.

The different types of data from the parties involved follow different formats and naming conventions and are accessible via different applications offered by each of the individual third parties. For an operating company, trying to monitor activities and keep the completion and field production teams informed of what is happening during and after a frack becomes a real challenge and a constraint on the business.

The Solution

The solution is an Industrial Analytics Data Hub that extracts data from multiple site and corporate historians such as OSIsoft PI and Rockwell FactoryTalk. Because these systems are often configured to capture only a fraction of the data from SCADA and control systems, connectivity and unconstrained data capture from these systems are a must. Some typical SCADA and control systems include Schneider Electric GeoSCADA (aka ClearSCADA), Ignition from Inductive Automation, and Rockwell Automation. Because these systems are critical to operations and the environment, their security and performance must be maintained.

In systems that support geographically dispersed assets, the data often arrives as delayed batches from RTUs due to communication latency and bandwidth. Those delayed data sets may include unexpected and sudden changes in pressures. Delayed data often takes a direct path to archives and does not get published as snapshots. Thus, the solution cannot rely only on the most recent values; it needs to extract delayed data as well.
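As a rough sketch of why event-time storage matters (illustrative only, not Fusion's implementation): a store keyed by the timestamp at which a value was measured, rather than the time it arrived, can accept a late RTU batch and still return a complete, correctly ordered series.

```python
class DelayTolerantStore:
    """Minimal sketch: ingest delayed RTU batches by event time, not arrival time.

    A "latest snapshot" query would silently miss delayed batches that go
    straight to the archive; keying storage by event timestamp keeps every
    late-arriving point queryable in its proper place in the series.
    """

    def __init__(self):
        self._data = {}  # tag -> {event_time: value}

    def ingest_batch(self, tag, batch):
        # batch: iterable of (event_time, value) pairs; may arrive hours late
        self._data.setdefault(tag, {}).update(batch)

    def series(self, tag, start, end):
        # Full time-ordered history over [start, end], including late points
        points = self._data.get(tag, {})
        return sorted((t, v) for t, v in points.items() if start <= t <= end)
```

A delayed batch ingested after the fact simply slots into the middle of the series on the next query, so a sudden pressure excursion buried in that batch is not lost.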

Drilling supply companies such as NOV and PASON offer data collection and analytics as a service to clients. They deliver data to clients using a publish-and-subscribe approach via readily available cloud APIs. Surface pressure monitoring service suppliers such as ABRA also offer the ability to deliver offset monitoring data via cloud APIs. Thus, an Industrial Analytics Data Hub must be able to support the ingestion of the complex messages received from these organizations and merge them with data collected from operators’ systems in parallel.

Unless users have good subject-matter understanding of the wells and the operations, the data and messages received from these third-party providers can be difficult to understand. As a result, various analytics teams often cannot relate to the data because it does not follow industry or company-wide guidelines, such as WITSML and industry-standard well-naming conventions.

In order to correlate data from these various systems, an Industrial Analytics Data Hub must provide the ability to normalize and organize the data according to the needs of the fracking activities. The number of wells involved during a frack operation varies from job to job, so an Industrial Analytics Data Hub must be able to ingest data in a dynamic fashion and allow authorized users to extract all data that is relevant to their interests. Systems that do not use graphs to model data are often limited in reflecting these dynamics and will not give the frack operating team easy access to the data they need.
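A minimal sketch of the graph idea follows. The node and tag names are made up for illustration and do not reflect a real Fusion schema: a frack job links to a varying set of wells (including offset wells monitored by third parties), and a simple traversal collects every data tag relevant to that job.

```python
# Hypothetical graph: a frack job node points at the wells involved, and each
# well points at its data tags. Adding or removing a well for the next job is
# just an edge change, which is the flexibility a fixed schema lacks.
graph = {
    "frack-job-42": ["well-A", "well-B", "offset-well-C"],
    "well-A": ["tag:wellhead-pressure-A", "tag:flow-A"],
    "well-B": ["tag:wellhead-pressure-B"],
    "offset-well-C": ["tag:surface-pressure-C"],  # third-party monitored
}

def tags_for(node, graph):
    """Walk the graph from a node and collect every data tag it reaches."""
    tags, stack, seen = [], [node], set()
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        if n.startswith("tag:"):
            tags.append(n)
        else:
            stack.extend(graph.get(n, []))
    return sorted(tags)
```

An authorized user asking for "everything relevant to frack-job-42" gets all four tags without knowing in advance how many wells, or whose wells, the job touches.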

A one-size-fits-all solution is not ideal; most organizations are already leveraging existing tools such as PowerBI, ArcGIS, and/or existing services from their drilling providers. A rip-and-replace can be a risk involving lots of change management, so it is imperative that an Industrial Analytics Data Hub is able to integrate with and openly deliver the right data to those existing consumers.

If you can relate to the above challenges, Uptake Fusion is a key enabling component of your overall solution for implementing an Industrial Analytics Data Hub.

The Value

In summary, companies looking to streamline operating activities and processes can experience measurable value with Fusion as follows:

  • Instead of waiting days to discover what occurred, operating companies can enhance collaboration and communication through early warnings of sudden, unexpected changes in pressure or other relevant hydraulic fracturing responses, delivered to the teams involved.

  • It also improves data accessibility and analysis during frack and post-frack to determine root causes and potential actions from a single source of data rather than multiple dispersed applications.

  • It offers a central repository to capture records from all parties involved for due diligence and regulatory compliance reporting purposes.

  • It helps operating companies to capture knowledge in the form of monitoring rules, recommendations for benchmarking, and continuous improvement analysis and reporting.

  • It enables operating companies to share data in a more collaborative manner with their drilling and monitoring service suppliers.

  • Lastly, it provides a foundation to manage your industrial data from your cloud and prepares you for growth into other monitoring areas, such as water monitoring and asset performance monitoring, as the same data is often required by other teams and applications.

Since Microsoft made the decision to stop development on Time Series Insights (TSI), Uptake has been working to replatform our flagship OT cloud data historian, Fusion, on Azure Data Explorer (ADX). The ADX team has worked with us every step of the way, and we have been able to help them in our own small way, too.

At Microsoft Ignite, the ADX team announced its new data trender interface, “Kusto Trender,” available on GitHub. The trender provides ADX users with a data exploration experience very familiar to users of TSI Explorer. The Uptake Fusion team is proud to say that we participated with Microsoft in its development. The trender provides an out-of-the-box data schema and user experience for ADX application developers and users that they can use as-is or extend and modify to meet their needs.

With the integration of the Kusto Trender, our Fusion product – an Azure-native OT data store – is now much farther along on our journey to becoming the full-featured cloud-native OT data historian that our customers need. Adopting ADX as our storage engine was a huge step forward in terms of performance, scale, and data flexibility. The trender now allows us to provide the rich interactive data exploration capability historian users need.

Data is a top priority for many in the asset-intensive industries. Digital transformation and sustainability run on data availability.

Integration in the cloud supports wide-ranging initiatives from predictive modeling of oil well maintenance to reliability-centered inspections and ESG reporting.

With data open for consumption, teams have the flexibility to develop industrial intelligence for smarter, more efficient operations. However, that isn’t always the case.

There are a few common blockers to making data available in the cloud.

  • Data access
  • Insufficient data
  • Data standardization
  • Lack of context
  • Data quality

Data Access for Industrial Intelligence

Data scientists and engineers gain a foothold on data when quality, context, standardization, and retrievability are guiding principles for industrial data management.

As at many companies, time-series data at Chevron is first collected in operational technology (OT) and supervisory control and data acquisition (SCADA) systems.

Seeing this opportunity for enterprise-wide monitoring, reporting, and analytics, Chevron’s Time-Series Services needed to provide data scientists and engineers with easily retrievable operational data that could facilitate continuous improvement of data quality, context, and standardization. In addition, the scale of transformation required Chevron to seek a new approach.

Chevron turned to Fusion to get past these challenges and to put 10 years of historical data and counting into its data lake on Microsoft Azure.

Design Features of On-Premise Systems

There are a few standard approaches to data movement to the cloud, but each has its drawbacks. In general, OT and SCADA systems work well as repositories of data aligned to specific business units. However, they lack the extensibility the cloud needs for Big Data initiatives, like Industrial AI/ML. The common blockers noted above arise for a variety of reasons. Some common causes of challenges include:

  • Paywalls: Licensing and user permissions make data sharing in the cloud expensive
  • Firewalls: Cybersecurity concerns keep data in on-premise systems
  • Data compression: Movement of data to the cloud takes a toll on data quality and the resulting analytics, often due to the loss of metadata
  • Wrangling & feature engineering: Data scientists and engineers, instead of prioritizing value-added work like the development of data science models, slog away at manual tasks so data is consistent and complete

Moving Industrial Data to a Data Lake

With Fusion, approved consumers leverage up-to-date data to stay on top of operational activity. Built-in tools from Microsoft Azure like PowerBI allow data scientists and engineers to easily develop dashboards for monitoring and reporting. Fusion also can feed data into Uptake Radar to configure and visualize analytics, workflows, and dashboards for decision support.

Data consumers are able to take advantage of more sophisticated analytics like Industrial AI/ML and digital twins because Uptake Fusion preserves metadata in an open and secure format when it moves OT data to Microsoft Azure.

“BY TEAMING WITH UPTAKE AND MICROSOFT TO CONNECT AND ALSO AUTOMATE COMPLEX DATA, WE ARE ABLE TO UNLOCK THE INSIGHTS TO HELP CHEVRON DELIVER ON HIGHER RETURNS AND LOWER CARBON FOR OUR ENERGY FUTURE.”

— ELLEN NIELSEN, CHIEF DATA OFFICER, CHEVRON

Realizing a Lower Carbon Future at Chevron

By deploying Fusion, Chevron leverages its data at scale to develop intelligence that unlocks valuable outcomes for its business and its environmental, social, and corporate governance (ESG) initiatives. Fusion helps Chevron maximize output and forecast production, improve revenue growth, and advance its sustainability initiatives.

See How Chevron Puts Data to Work

Industry 4.0 is powered by data.

As companies make applications like Industrial AI/ML, digital twins, and operational orchestration a central part of their operations, they are exploring new operating models enabled by the core principles of Industry 4.0: interconnectivity, cloud computing, AI/ML, big data management, and user adoption. In this blog, I’ll cover how thinking about process-intensive assets and systems in terms of data requirements sets up organizations for future success and flexibility. It is an important exercise as companies establish their asset analytics strategy.

Anchoring Asset Data Governance in the Asset Administrative Shell

When I talk with prospects and customers about use cases for asset performance management that tackle their digital and sustainability initiatives, the discussion usually turns to data availability, and the time and availability of subject matter resources to consolidate the data. They need widespread and secure data with the right domain context from various sources to establish and support these initiatives. It can be costly, though: on average, data scientists report that they spend 39% of their time wrangling data. I use this slide to explain some of the differentiating capabilities of our Fusion product and how it makes asset analytics possible and cost-effective. The graphic depicts Industry 4.0’s Asset Administrative Shell.

The asset administrative shell describes the many types of data and visibility provided by a connected asset in a process-intensive environment like oil and gas, manufacturing, energy and utilities, and mining and metals.

The submodels reference categories of collected data to support various asset lifecycle functions such as engineering, operations, maintenance, reliability, and energy. These submodels are virtual representations stored in the cloud for various analytics as canonical models, often shown as hierarchies or graphs.

Submodels represent key value levers that measure, control, and manage strategic goals, which are rooted in key performance indicators like overall equipment effectiveness (OEE).

At the operational submodel level, the focus might be on production and quality. At the maintenance submodel level, the focus is aptly on maintenance, availability, and cost.

Each submodel organizes asset data from various sources and references a combination of design and manufacturing technical specifications, sensor data, alarms and events, maintenance, and work order data.

A key benefit of establishing a submodel strategy per the Asset Administrative Shell is that it allows organizations to implement a better structure for asset data governance and stewardship. This structure also lets organizations explore new business models with their analytics suppliers: models whereby original equipment manufacturers, specialized service providers, and even technology providers such as Fusion can maintain submodels as part of a SaaS-enabled managed service offering.

Shared Data Access for Industry 4.0

As Industry 4.0 technologies like digital twins and AI/ML gain more traction, asset-intensive companies that scale their initiatives will need to decentralize data management. Leveraging the submodels is key to making disparate data consumption possible by various stakeholders across the organization. Since the management of these submodels involves various stakeholders in and outside the company, data access and consumption will need to be flexible as well. And going forward, as companies eye autonomous or self-operating assets, decentralized data management will be fundamental. Without a human go-between, the asset administrative shell functions as the store of data (knowledge) about an asset or system. These submodels (or administrative asset shell as a whole) enable the adoption of other emerging technologies, including AR/VR to enhance the human interactions with these assets — either from remote locations or from the field.

Bringing Data Together with Uptake Fusion

But even before that organization of submodels, which is necessary to enable Industry 4.0 technologies, can happen, data governance begins with bringing together operational technology (OT) and information technology (IT) data in the cloud. That is what provides the scale of data consumption that enterprises are after. We designed Fusion to be the flexible foundation to extract and store OT and IT data, manage these submodels, and in turn support industrial data analytics. If you’re not familiar with Fusion, the video below offers the highlights.
At a very high level, Uptake Fusion securely extracts industrial data – including from on-premise systems — and moves it to the cloud for organization and curation with context (like maintenance work orders or metadata). This cloud integration has been par for the course in IT for some time. For OT, due to complexity and costs, not so much. With Fusion, data consumption isn’t priced on a per-tag or per-license basis, or by the number of deployment environments. Instead, it’s based on usage. The cloud environment of the organization allows authorized users to develop the industrial intelligence they need to improve operational performance, as well as environmental, social, and corporate governance (ESG) initiatives. In that way, it’s a key component of the data backbone for digital transformation and sustainability.

Data Traceability & Asset Lifecycle Changes: Pumps as an Example

Traceability in this shared framework of data management is especially important. As asset lifecycle changes come into effect, where these submodels are located in the cloud environment and who modified these submodels become relevant. As an example, take a horizontal multistage centrifugal pump system used for onshore crude oil transfer.

Maintenance and reliability for pumps are a priority for many organizations.

Over the course of the pump’s lifecycle, repairs, refurbishment, and replacements influence performance. For any authorized user to understand the productivity of and risk to oil transfer that is associated with these changes, this activity should be recorded and preserved.

With improvement and reporting needs spanning businesses, organizations depend on a contextualized and on-demand view of asset data to support decisions. If the motor attached to a pump were replaced by a better technology, this change would impact the efficiency and performance of the pump. The change could also influence energy consumption, quality, and yield.

Most organizations replace the motor in their monolithic asset framework model without tracking these lifecycle changes and making them available. As a result, many of their reports will not be trustworthy if their time span extends beyond the repair, when a different motor was in place. This also impacts analytics results, as biased data from the previous motor will skew the analytics for the new motor, and the overall pump predictions and risk profiles.
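One way to avoid that bias, sketched here with hypothetical data structures rather than any real asset model, is to segment the sensor history by component install events before training a model or running a report, so each motor's data stays with that motor.

```python
from bisect import bisect_right

def segment_by_component(history, install_events):
    """Split a sensor history into per-component segments.

    history: list of (timestamp, value) pairs, time-ordered.
    install_events: list of (timestamp, component_id) pairs, time-ordered; each
    marks when a replacement (e.g., a new motor) went into service.
    Training on the undivided history would blend two different motors, so
    every reading is attributed to the component installed at that time.
    """
    times = [t for t, _ in install_events]
    segments = {}
    for ts, value in history:
        idx = bisect_right(times, ts) - 1  # latest install at or before ts
        if idx < 0:
            continue  # before the first known install; provenance unknown
        component = install_events[idx][1]
        segments.setdefault(component, []).append((ts, value))
    return segments
```

A report or model then consumes only the segment for the motor actually in place over its time span, instead of silently mixing pre- and post-replacement behavior.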

Launch your Asset Analytics Strategy for Industry 4.0

As Industry 4.0 skills like data and digital expertise become fundamental to operating an industrial business, companies should also note the tradeoff in skills. People running site rounds and inspections are looking to their employers to store and develop subject matter expertise and best practices.

Subject matter experts and AI/ML-proficient resources are limited, and they need to be allocated to high-priority items that impact their critical production assets. Building your asset analytics strategy for Industry 4.0 means splitting the problem and delegating activities to third-party data consumers that can provide insights to your team.

Fusion provides asset analytics for ancillary assets (or “balance of plant,” as they are known in power engineering) such as pumps and transformers. Maintaining an in-house focus on critical assets allows teams to reallocate their expert time to the highest-priority activities concerning their production assets, while Uptake maximizes that time by delivering simple insights and supporting evidence to enhance their decisions and prioritization of activities.

The dynamic for decentralized data consumption will be key as companies collaborate to address the knowledge gap, which includes equipment and system maintenance. Digitized subject matter expertise unlocks and accelerates the potential of digital initiatives to improve throughput, cut unplanned downtime in half, and substantially improve labor productivity. It’s also a gap that the asset administrative shell, as part of the backbone for digital transformation and sustainability, can fill.

I have spent my career connecting data to decisions. Every company is looking to maximize the value of their investment in data within the constraints defined by their value levers, whether that is Safety, Carbon Footprint, Productivity, Efficiency, or Quality.

Fundamentally, companies are looking to reduce variability in their decision-making so their actions at every level — from strategic to tactical and operational, including the control of the industrial assets — allow them to better predict their outcomes.

When I talk with clients in industries such as mining and oil and gas, I use this slide. I think it does a good job of communicating our shared goals.

The value chains of industrial operations and industrial intelligence

What we’re doing at Fusion with industrial intelligence is not so different from their daily routine, albeit at the frontline of their site — a refinery, mine, assembly line, power plant, or some other industrial setting. We extract, transport, process, store, and distribute valuable assets.

The Value Chain of Industrial Intelligence

Like our industrial customers, we take raw materials and refine them. In our case, that just so happens to be data. It’s a comparison our Executive Chairman has made before — one between data science and shale production. We blend data, our raw material, with other catalyst materials (our Asset Strategy Library® or other knowledge and information). We process them using advanced analytics (empirical and statistical, using first principles and AI/ML) and produce a refined product (our insights and recommendations) and byproducts (patterns, trends, and metadata for machine learning models — ultimately improving predictions for equipment performance). Data doubles as a mirror for industrial activity, but it has the good fortune of time (and math and physics) to give customers a dynamic view. Our customers get advanced visibility of future performance and an enhanced perspective of current and past behavior.

Pumps as a Model for Industrial Intelligence

Take pumps as an example. In the oil sands industry, pumps are present in bitumen extraction, diluted bitumen product tanks, high-conversion refineries, upgraders, and simple refineries. A pump can be used for condensate extraction. For this application, the pump system removes steam from the feed system by generating enough pressure. It then delivers the excess steam to a deaerator, helping to dewater the working environment and keep personnel safe and the operation productive. However, the same type of pump can be used for steam recovery for power generation or for water treatment at a refinery.

Common asset types prevail across the heavy industries, but operating contexts vary.

The environment in which operators use equipment shapes its performance. The pump manufacturer may recommend a course of preventive maintenance, but since the pump was intended for use across industries — including at indoor sites — the operating context is often a missing or only partial aspect of the maintenance approach. This context is also an important consideration for predictive maintenance and analytics. Fusion leverages its Asset Strategy Library to add this context to the blend of field sensor, work order and maintenance data. Based on the stressors that are present in the operating context of the pump, Uptake can predict risk and future failures as degradation starts to happen.

Maintenance and Reliability for Pumps

Pumps represent just one source of challenge and opportunity for maintenance and reliability professionals. Still, they can pose significant and costly problems. In an Uptake customer survey, 70% of industrial organizations identified pump systems as being a top risk to productivity.

Short-handed maintenance and reliability engineers have to maintain asset types in addition to pumps. Companies need a way to prioritize maintenance work activities to ensure lower risk and higher reliability of their pump systems.

In many cases, multiple original equipment manufacturers (OEMs) for pumps complicate monitoring the performance of the asset or its components at the system, site, and fleet levels. That makes managing various pump systems challenging — because of different suppliers, operating systems and conditions, maintenance practices, and spare parts availability (and especially so with supply chain disruptions).

And expensive, too: pumps can consume around 30% of the maintenance budget for process-intensive operations.

Theory of Constraints Approach to Industrial Intelligence

The theory of constraints has been used across industries as a methodology for identifying the most important limiting factor that stands in the way of achieving a goal and then systematically improving that constraint until it is no longer a limiting factor.

It’s a relevant approach to pumps, as well as other industrial assets. Companies have a large number of critical assets and a limited number of resources in capital and personnel.

By scaling insights on ancillary (or balance-of-plant) systems, including pumps, with software provided by Fusion, your maintenance and reliability teams have the support to make more holistic decisions about operations, addressing both the complexity of their most critical production systems and overall site conditions.

There is no one-size-fits-all approach to asset performance management. The value chain of industrial intelligence and the theory of constraints approach see to that.

We build up a buffer of high-quality insights to help your team focus on the right priorities to maintain consistent production and a steady operation. This frees up more of a team's time for higher-value activities. Through industrial intelligence, maintenance and reliability professionals can validate and optimize their approach to analytics.

Focus on Parallel Value Chains

This risk-based approach considers safety, environmental, productivity, efficiency, and quality conditions to better balance the key value levers, reflecting true bottom-line value.

Pressed for time and resources, companies need decision support, industrial intelligence, that caters to the industrial assets under management. Instead of drilling for oil or mining copper, industrial intelligence mines data to deliver insights tailored to a company's requirements, so that it can produce higher-quality products faster.

Today is Earth Day in the US, a day set aside since 1970 to show support for environmental protection. Companies in energy and manufacturing are answering the call to be more sustainable and embracing environmental, social, and corporate governance (ESG). Wind energy just surpassed coal and nuclear power in energy output. Big energy companies are pursuing lower-carbon energy sources, and by 2024, 70% of manufacturers will invest in software tools that support sustainability.

While publicly traded industrial companies are transforming their organizations to be stewards of the environment and transparent in their operations, it is a steeper challenge for the market majority and laggards to change and meet ESG reporting requirements. In fact, in one global study, nearly half of 1,000 institutional and wholesale investors surveyed said a lack of robust data is holding back their organization's further adoption of ESG goals like net-zero and lower carbon. The Securities and Exchange Commission (SEC) is also proposing sustainability disclosure rules.

Fortunately, as this video shows, industrial companies in energy and manufacturing can combine solutions from Fusion and Microsoft to be in a better position for ESG reporting requirements. What better day to highlight it than Earth Day?

ESG Checklist

Put industrial asset data in the cloud
For a consistent system of record, your asset data must be transferred from source systems to a cloud environment suited for advanced analytics.

Keep data confidential
Moving data carries cybersecurity risk. Look for technology that can protect your data on its journey to the cloud and leverage the privacy of your own cloud environment.

Provide an audit trail
When data is verifiable from the sensor to the boardroom, it is much easier to attest to the accuracy of the numbers cited in ESG reports, and to avoid accusations of greenwashing.

Organize data for specific ESG requirements
ESG requirements vary from industry to industry. Depending on your company’s situation, you need data organized to your specific ESG requirements.

Achieve data granularity for smarter decision making
Data granularity refers to the level of detail in how data is structured, such as the time intervals in time-series data. Granularity is critical to the deeper understanding required for decision making.

Bring accountability to sustainability without giving up profitability
On this Earth Day, we celebrate business efforts to be more sustainable. Energy and manufacturing are just two industries undergoing dramatic changes to operate cleaner and greener. The way forward is to bring accountability to sustainability, so your company can advance sustainability initiatives and meet ESG requirements, while also pursuing profitability goals.
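To make the granularity point in the checklist above concrete, the sketch below rolls hypothetical minute-level pump sensor readings up to hourly averages using only the Python standard library. The timestamps and values are invented for illustration; the right granularity in practice depends on the ESG metric being reported.

```python
# Hypothetical sketch: roll minute-level sensor readings up to hourly
# averages using only the standard library. Timestamps and values are
# invented; real granularity choices depend on the reporting need.
from collections import defaultdict
from datetime import datetime, timedelta

def resample_hourly(readings):
    """Average (timestamp, value) pairs into hourly buckets."""
    buckets = defaultdict(list)
    for ts, value in readings:
        hour = ts.replace(minute=0, second=0, microsecond=0)
        buckets[hour].append(value)
    return {hour: sum(vals) / len(vals) for hour, vals in sorted(buckets.items())}

# Two hours of one-reading-per-minute flow data for a single pump.
start = datetime(2022, 4, 22, 8, 0)
readings = [(start + timedelta(minutes=i), 100.0 + i) for i in range(120)]

hourly = resample_hourly(readings)  # 120 points collapse to 2
```

Coarser buckets like these are cheaper to store and simpler to report, while the original minute-level series preserves the detail needed for diagnostics, which is why many teams keep both.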