
The Truth About the “Easy Button”:

How to Accurately Migrate Massive Amounts of Engineering Data in Minutes

Data is an asset, and operators know that digitizing legacy engineering data, moving it into company-managed platforms, and maintaining quality data-centric systems in-house are necessary to ensure competitive advantage and mitigate risk. However, digital transformation can be a tedious and arduous process, not for the faint of heart. The problem is that legacy engineering data is typically not structured in neatly labeled columns and rows, nor rich with metadata tags, as it is in many other domains. Instead, engineering data arrives as PDFs, CAD files, Excel workbooks, Word documents, napkin sketches, and other ambiguous formats. In most cases, these unstructured documents and renderings have little or no metadata at all.

To add to the confusion, not only does legacy data come in various file types, but even within the same file type, there can be numerous formats for the same asset. Consider what happens when digitizing data for existing assets, such as control valves. A facility could have hundreds of control valves installed by several different vendors or contractors, each with its own way of classifying and labeling the specifics of the valve. This leads to the same data point for each valve being labeled differently across the specification documentation: one may call a data point "type," another may call the same data point "material," and yet another might call it "metal." Before this data can be migrated to digitize the facility, it must be formatted, mapped, and rationalized.
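
To make that concrete, here is a minimal sketch of the kind of label rationalization involved, assuming a simple Python synonym table; the field names and labels are illustrative, not a real vendor mapping:

    # Hypothetical sketch: mapping vendor-specific labels onto one canonical
    # schema. The labels and the FIELD_SYNONYMS table are illustrative only.
    FIELD_SYNONYMS = {
        "body_material": {"type", "material", "metal"},
    }

    def rationalize(record: dict) -> dict:
        """Map a vendor record's labels onto the canonical schema."""
        canonical = {}
        for label, value in record.items():
            key = label.strip().lower()
            for field, synonyms in FIELD_SYNONYMS.items():
                if key == field or key in synonyms:
                    canonical[field] = value
                    break
            else:
                # Unmapped labels are flagged for an engineer to review,
                # not silently dropped.
                canonical.setdefault("_unmapped", {})[label] = value
        return canonical

    # Two vendors, same data point, different labels:
    print(rationalize({"Material": "316 SS"}))  # {'body_material': '316 SS'}
    print(rationalize({"Metal": "316 SS"}))     # {'body_material': '316 SS'}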

There are traditionally two approaches to data rationalization: manual and automated. Unfortunately, both have flaws, in the outcome and in the way progress is measured: metrics are set to assess how quickly the data moves from one system to another, with no appreciation for the time it takes to properly tag and map the data.

First is the laborious and costly approach of hiring an engineering company to rationalize and migrate the data manually. Of the two traditional methods, this results in the highest data integrity. However, this approach is slow, and the interruption to business can be drawn-out and frustrating, taking months to complete. Additionally, because this is a lengthy manual effort performed by highly skilled resources, it is often cost-prohibitive.

The second is an automated approach. Automation can rely on A.I. to rationalize and migrate the data. Unfortunately, automated data migration is often performed by well-meaning software vendors and I.T. consulting companies not trained on the uniqueness of engineering data, who naively oversimplify the effort required to digitize legacy engineering, facilities, and asset data. With little understanding of the intricacies of engineering data, they believe that the same approach used in other domains will work. They will say, "It is easy; we will put it in a data lake, and then A.I. and Big Data programs will do the work." But is that true? Technically, yes, the data may be there, but can you find it when you search? And in the case of documents and drawings without metadata, the difference between putting them in a data lake and putting them in a black hole is negligible. Additionally, the lack of engineering-specific knowledge prevents these vendors from spotting issues upon review. The outcome is low data integrity, resulting in low trust and increased risk. The up-front cost is lower, but the clean-up such an approach requires is expensive, with overall costs exceeding those of the more laborious manual approach.

There is another approach that marries the two traditional methods. It uses technology to automate the transfer, but engineers to rationalize the data and map the outcomes. The result is a migration with high data integrity at a cost far lower than the traditional manual approach, with brief or no business interruption. The secret is in the upfront mapping and rationalization. By assessing each of the data formats and intelligently tagging and mapping each to the new digital environment, with sample testing along the way, the result is quality data that is trustworthy.
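
As a rough sketch of the sample testing described above (the mapping is the engineer-built rationalization table, and write_target is a placeholder for whatever connector the target system exposes, not a real API):

    import random

    def migrate_with_sampling(source_rows, mapping, write_target, sample_rate=0.01):
        """Apply an engineer-built field mapping to every source row, holding
        back a random sample of (source, migrated) pairs for verification."""
        samples = []
        for row in source_rows:
            migrated = {mapping[label]: value
                        for label, value in row.items() if label in mapping}
            write_target(migrated)  # placeholder for the real target connector
            if random.random() < sample_rate:
                samples.append((row, migrated))
        return samples  # engineers spot-check these before go-live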

Measuring success by how quickly the migration starts, and judging progress by the migration rather than the rationalization, is a mistake. Patience is key. The adage "measure twice, cut once" applies. Invest the time in intelligently mapping the data, and the migration should be as simple as hitting an "easy button." We recently moved over 5.6 million cells in less than 18 minutes for a client, but it took three months to map and verify. The client is not only thrilled with the high quality of data now available to its engineers and key stakeholders; the project also came in under budget and sooner than promised.

Data migration indeed can be as easy as hitting the “easy button.” ProLytX is an Engineering I.T. firm based in Houston, TX, and is a leader in this field, coaching clients to success with a unique combination of engineering and I.T. skills.  If you want to learn more about ProLytX and how we can help you bridge the gap between I.T. and Engineering, find us at www.prolytx.com.

Say Goodbye to Proof Testing

One of the biggest and most costly challenges of plant maintenance is proving that systems are functional and safe. Proof testing is one of the most critical aspects of plant operations and costs millions of dollars in lost production every turnaround in the name of safety and compliance.

Even though we have come a long way in how we monitor the operations of facilities, documentation, reporting, and analytics remain problematic because information lies in disparate systems. Instrumentation data from the safety instrumented system (SIS) and distributed control system (DCS) is not easily accessible and is typically low-resolution by the time it hits the DMZ or business LAN; as a result, valuable trip information is left uncaptured or hidden, and the entire proof testing cycle continues. Companies therefore rely on manual testing and teams of technicians to manage this work in the critical path. With digital transformation and an analytics engine to drive analysis and better reporting, plants can take advantage of trip and shutdown information to reduce test crews and streamline offline testing.

A general lack of visibility means that the entire proof test cycle must run its course when it comes time for routine maintenance. Shutdowns that just occurred must be simulated again from the field for validation and sign-off purposes. If there were better analysis and reporting, the testing could be optimized, thereby reducing the critical path and saving hundreds of thousands, if not millions, of dollars every turnaround. ProLytX has partnered with industry majors to tackle this problem.

ProSMART (System Monitoring & Analytics in Real-Time) software can understand how an SIS/DCS is configured and perform high-resolution data capture and real-time analytics and reporting. This disruptive technology is transforming the way plants look at and schedule proof testing and even instrument maintenance. ProSMART removes work scope and time-consuming tests during turnarounds by automating manual reporting tasks and data collection. It also provides a dashboard for leaders to manage risk, approve reports, schedule testing, and identify bad actors.
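
For illustration only, and not ProSMART's actual implementation: the core idea can be as simple as checking whether a device fully stroked within its required time during a real trip, so that the demand can be credited against a scheduled proof test. The tags and limits below are hypothetical:

    from datetime import timedelta

    # Hypothetical required full-stroke times per valve tag
    MAX_STROKE = {"XV-1001": timedelta(seconds=8), "XV-1002": timedelta(seconds=12)}

    def creditable_tests(trip_events):
        """trip_events: (tag, demand_time, closed_time) tuples taken from
        high-resolution capture during a real shutdown."""
        credits = []
        for tag, demanded, closed in trip_events:
            stroke = closed - demanded
            if tag in MAX_STROKE and stroke <= MAX_STROKE[tag]:
                # The valve proved itself under a real demand; this test
                # may be removed from the turnaround scope.
                credits.append((tag, stroke))
        return credits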

Benefits include:

  • Real-time compliance with IEC 61511
  • Automated reporting and identification of successes and failures
  • Reduction of costly testing during turnarounds
  • Improved real-time risk management
  • Cross-platform communication spanning a majority of SIS/DCS platforms

Recently, a facility using ProSMART identified $1.5MM in turnaround cost savings due to a reduction in SIS proof test requirements and third-party technician crews. During the plant shutdown, the operator captured enough data to eliminate over 20% of the planned maintenance testing scope. Automated reports were validated by third-party auditors and found to be more than sufficient. Accurate valve stroke time data and instrumentation bad actors were captured, which also allowed for targeted maintenance activities.
 
ProLytX is an Engineering I.T. firm based in Houston, TX, and is a leader in this field, coaching clients to success with a unique combination of engineering and I.T. skills.  If you want to learn more about ProLytX and how we can help you bridge the gap between I.T. and Engineering, find us at www.prolytx.com. 
 

Unrestricted Risk

When an operator considers cybersecurity, they tend to focus on Legal, HR, and Finance data. This makes sense; intellectual property and personal information security are essential for competitive advantage, and Dodd-Frank, SOC, and FTC regulatory compliance also come into play. But what about facilities and engineering data? As the hosting of major capex projects shifts from contractor environments to owner-managed environments, the access third parties need in order to collaborate is already available through the engineering applications, so what can be gained from securing that data? The better question is, what can be lost by not securing it?

A while back, in a meeting with a client, we were discussing the vulnerabilities of remote access to engineering solutions. The client was sure that their engineering applications were 100% secure. To prove it, he provided a guest login to their hosted engineering applications environment during the meeting. Within minutes, our team had access to the backend data and environment details of one of their most widely used applications. A few moments longer, and we had made it onto their internal business network. They were stunned. This could have been a significant safety risk had these applications been hosted in part on their PSN.

So why is no one looking at this type of access as a security breach? For one, the objectives are mostly honest; the perpetrator's goal is typically not to sabotage or steal, but to make their projects more efficient. After all, the data is project information and is made available in the engineering application anyway. However, it is not the access to the data that is a concern; it is how the data is accessed. Problems occur when the user bypasses the engineering applications to gain entry into the backend databases. Direct database access can allow contractors to streamline their workflows or practices, which may sound like a good thing. But don't let these benign intentions distract from the inherent risks.

There are several potential problems with failing to address security in these engineering application-hosted environments.

Direct and unauthorized access to the database undermines the safeguards built into application workflows, bypassing both approval processes and activity tracking (a database-level lockdown sketch follows the list below). Consider also:

  1. There is no way to guarantee that everyone is operating with the best intentions.
  2. There is no way for a user with knowledge of only one project to fully understand how data is being used throughout the facility on other projects.
  3. Changes to the underlying data can be disastrous and expensive even when the intentions are virtuous.
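
As a minimal sketch of what such a lockdown could look like, assuming a PostgreSQL backend (real engineering suites may run Oracle, SQL Server, or others, and the role and view names here are hypothetical), contractor accounts can be limited to a read-only published surface so every write flows through the application and its approval workflows:

    import psycopg2  # assumes a PostgreSQL backend

    # Contractors authenticate as 'contractor_ro', which can read published
    # views but cannot touch the application's tables directly, so workflow
    # approvals and activity tracking remain intact.
    HARDENING = [
        "REVOKE ALL ON ALL TABLES IN SCHEMA app FROM contractor_ro;",
        "GRANT USAGE ON SCHEMA app TO contractor_ro;",
        "GRANT SELECT ON app.published_specs TO contractor_ro;",  # read-only surface
    ]

    def lock_down(dsn: str) -> None:
        with psycopg2.connect(dsn) as conn:  # commits on clean exit
            with conn.cursor() as cur:
                for statement in HARDENING:
                    cur.execute(statement)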

With today's remote work environments and large data projects, it is common for operators to work with several contractors across multiple projects at any given time, all of them accessing the same information. Changing a single data point could have devastating and costly repercussions for others, like a catastrophic failure or tens of thousands of man-hours to fix an issue. Breaches don't have to be terroristic or competitively motivated to be threatening. They generally come from engineers merely trying to simplify their work by venturing into a restricted area without a sign or lock on the door.

By preemptively implementing an application-based security solution in hosted engineering environments, companies can prevent unwanted access, maintain better data integrity, and minimize the risk of costly mistakes, the potential consequences of which far exceed the cost of a security solution.

ProLytX is an Engineering IT firm based in Houston, TX, and is a leader in this field, coaching clients to success with a unique combination of engineering and IT skills.  If you want to learn more about ProLytX and how we can help you bridge the gap between IT and Engineering, find us at www.prolytx.com.

Safer Through Automation

Everyone in an oil and gas facility understands that safety is the highest priority, yet even with regulatory mandates and safety standards, there is still room for improvement.

There have been 130 incidents at chemical facilities reported over the past 20 years.¹ Between September 1995 and January 2015, there were approximately 128 fatalities and another 218 injuries at oil and gas refineries across the U.S. Of those who died, approximately 60 were the result of an explosion or fire, 35 either accidentally fell or were crushed by heavy equipment, 15 died of asphyxia, 2 were electrocuted, and 7 died of natural causes, typically cardiac arrest.²

Operating companies go to great expense to avoid such accidents; in fact, between extended downtime and labor costs for testing, safety is one of the largest expenditures for an operating facility. Functional safety testing is usually built into annual, semiannual, or triennial maintenance test schedules. When the facility is off-line for maintenance and testing, it can cost the operator in excess of one million dollars per day in lost revenue and overhead expenses. This significant investment is a testament to the importance corporations place on safety.
 
It goes without saying that operators are looking for ways to reduce functional safety test costs without increasing risk. The challenge is that the usual testing process is rigorous and manual, riddled with redundancy and ripe for human error. Often, because of the complexity and cost, some routines are reduced to minimalistic cause-and-effect tests. This simplifies the testing process but may not provide adequate testing and documentation. Such tests might meet regulatory standards, but they are not the most thorough, and they often fail to account for all of the potential consequences; in the unfortunate event of an incident, documentation can be the difference between a favorable and a costly investigation outcome.
 
Maintaining the safest facility possible and staying in regulatory compliance rely on accurate and thorough documentation. However, documentation can be a point of weakness. The more thorough the test, the more complex the documentation. There can be thousands of pages of procedure to test a single system, so it is a challenge to be consistent and to keep documents up to date. On the other hand, when documentation is poor or incomplete, much reliance is put on individual interpretation and the experience of the tester, resulting in tests being repeated inconsistently and test quality suffering. All of these are potential points of failure that increase risk and liability.
 
Operating companies are looking for solutions to improve overall operations, and many are investing in digital transformation and automation. But while there are solutions that aid in training testers with digital twins and simulation, there are few that leverage technology to automate and comprehensively improve functional safety (validation) testing and eliminate systematic error. Of those, only one is fit for purpose: Test Drive by ProLytX.
 
Unlike other testing solutions, Test Drive is vendor-agnostic and is compatible with most Programmable Logic Controllers (PLCs). It can be deployed as SaaS or on-prem. The solution removes human error by taking a templated approach, yet still allows for human expertise and oversight with a people-approved process. The engineer-designed user interface is far superior to the traditional multi-system, multi-document methods, bringing everything into a single view.
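
To give a feel for the templated approach in general (this is a generic illustration, not Test Drive's actual format; the tags and structure are hypothetical):

    from dataclasses import dataclass

    @dataclass
    class CauseEffectTest:
        """A templated cause-and-effect test case, filled in per instance."""
        name: str
        cause: dict             # process condition to force, e.g. a trip point
        expected_effects: dict  # final element states the SIS must command
        max_response_s: float   # required response time
        approved_by: str = ""   # human sign-off stays in the loop

    test = CauseEffectTest(
        name="High pressure trip, separator V-100",
        cause={"PT-100": 105.0},  # drive the transmitter past its 100 psig trip
        expected_effects={"XV-100": "CLOSED", "P-101": "STOPPED"},
        max_response_s=5.0,
    )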
 
The Test Drive implementation process starts with a documentation audit and a third-party IEC 61511 review so that issues of design interpretation and internal bias are removed. Then, by establishing a repeatable automated testing procedure with consistent and accurate documentation outputs, the rigorous and repetitive process of regular maintenance and Management of Change (MoC) testing can be completed, and regulatory reporting obligations met, without worry to the operator.
 
The real benefit of Test Drive to operators is not only knowing that their facility has optimized safety for its workers and the environment, but also that the solution improves overall testing rigor while shifting functional safety testing off the critical path through the efficiency of automation. This can reduce the testing process from weeks to days, potentially saving the company millions.
 
¹ According to the U.S. Chemical Safety Board (CSB), an independent, non-regulatory federal agency that investigates major chemical incidents.
 
² Article by Jim Malewitz, published by the Texas Tribune in partnership with the Houston Chronicle on March 22, 2015; it draws on OSHA records, government investigation reports, newspaper archives, and legal filings.

ProLytX is an Engineering IT firm based in Houston, TX, and is a leader in this field, coaching clients to success with a unique combination of engineering and IT skills.  If you want to learn more about ProLytX and how we can help you bridge the gap between IT and Engineering, find us at www.prolytx.com.

Engineering IT: A Unique Skillset

It is common for companies in the engineering industry to have a corporate structure with multiple disparate business units; traditional Information Technology (IT) and Engineering are two such units. IT fills the important role of providing software and network resources across the enterprise, but its success is measured on delivery, not adoption. Its approach is a break-fix model with, in most cases, no real responsibility for the end user's success or failure. This raises particular concern for engineers providing business-critical services who rely on IT for many of their mission-critical tools. The highly specialized nature of the engineering process presents nuances that make it nearly impossible for IT to deliver an out-of-the-box solution that the engineer or designer would actually use without significant customization and governance. The result is a disconnect: engineers struggle to find value in the tools IT provides, while IT, lacking engineering context, struggles to provide tools that are useful to the engineer. This gap often results in failed implementations and costly inefficiencies within the engineering workspace.

In recent years, as businesses have been challenged to deliver larger projects with global work-sharing, a number of engineers have taken an interest in improving the technology offered by proactively working in the gap between IT and Engineering to ensure that data and processes are aligned to the business and that fellow engineers are adopting the technology. These folks are the go-to people when problems occur and the ones who resolve complex issues behind the scenes, with a combination of analytics, programming, and specialized application knowledge. These individuals and the companies they work for started to recognize the value of their talent, and a new discipline was born – Engineering IT.

Engineering IT is complementary to traditional IT. It not only ensures the right tool selection but also provides the coaching and structure to make those tools work for the engineer. This specialization brings context and applies workflows, as well as strategy and governance. It bridges the gap between Engineering and IT.

For those companies that do not have functional Engineering IT roles, or that cannot pull engineers from their primary functions to take on technology improvement projects, there are service providers available, but not all are qualified. Many software providers and purely IT players claim to implement engineering solutions, but few have Engineering IT in their wheelhouse. They lack a basic understanding of engineering principles, budgets, schedules, and construction deliverables. Engineering IT requires expertise in these subjects, plus good functional knowledge of IT processes, to provide the skills, insights, and context to be truly successful. The right implementation partner will allow tools to be used to the fullest extent, ensuring user adoption and satisfaction, while maximizing your investment.

ProLytX is an Engineering IT firm based in Houston, TX, and is a leader in this field, coaching clients to success with a unique combination of skills. If you want to learn more about ProLytX and how we can help you bridge the gap between IT and Engineering, call us at (832) 540-8465.

Rethinking Asset Data

Enterprises count on their asset management system to be the “master” from which all other applications pull data. For most business functions, this works well while providing consistency and control to the enterprise. Still, the fundamental way data is collected, stored, and managed has led to challenges, particularly for maintenance operations. The reason for this is simple – maintenance requires accurate and reliable engineering data, and engineering data is continuously evolving.

Typically, when an asset management system is set up, a moment-in-time snapshot is pulled from the engineering platform's document control program, and a record is established in the asset management system. This "as-built" approach fails to account for the fact that as facilities go through maintenance cycles, the engineering data changes. Yet the asset management system data is rarely, if ever, updated. As maintenance applications pull from the asset management system, the information is often obsolete, which causes delays and errors and poses unnecessary safety risks. More experienced engineers often bypass the asset management system's data in the maintenance application and pull data directly from the engineering systems, especially when the data in the maintenance system just does not make sense, such as when the specs call for a valve that was discontinued ten years ago, something only an engineer with years of experience would know. Clearly, this is not sustainable or a best practice.

So, you are right: your asset management system is broken when it comes to serving maintenance applications. How can we fix it?

There are many approaches to this issue: messy middleware, inaccurate AI solutions, and expensive asset management bolt-ons are all available, but no software alone will solve the problem. Software does not address the evergreen nature of engineering data or the workflows involved in ensuring that the data is accurate and reliable.

The answer is not a simple fix; it requires a fundamental change in how you approach the data. It requires digital transformation and the building of an ecosystem between asset management and engineering data systems. The ecosystem needs to allow data to be updated where it was authored and routinely push fresh updates to the asset management system and maintenance applications. Updates made in the maintenance applications need to be reflected in the engineering programs, and vice versa, in close to real time.
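
A minimal sketch of that ecosystem, assuming hypothetical connector objects (engineering_api and ams_api stand in for whatever interfaces the real systems expose; they are not real libraries):

    def sync_tag(tag, engineering_api, ams_api):
        """Let the most recently authored version of a data point win,
        pushing it to the other system."""
        eng = engineering_api.get(tag)  # engineering record
        ams = ams_api.get(tag)          # asset management record
        if eng.updated_at > ams.updated_at:
            ams_api.push(tag, eng.data)          # engineering change -> AMS
        elif ams.updated_at > eng.updated_at:
            engineering_api.push(tag, ams.data)  # maintenance change -> engineering

    def sync_all(tags, engineering_api, ams_api):
        # Run on a short schedule to keep the systems in near real-time parity.
        for tag in tags:
            sync_tag(tag, engineering_api, ams_api)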

Approaching this problem can seem insurmountable. It is a difficult ask of IT departments and out of scope for engineers. You need Engineering IT expertise. A process-driven Engineering IT solution with connections, workflows, and stewardship that brings engineering systems, asset management, and maintenance together into an ecosystem will solve your asset management system problems and set your maintenance group up for success by reducing costly delays and errors, as well as mitigating safety risks.

ProLytX is an Engineering IT firm based in Houston, TX, and is a leader in this field, coaching clients to success with a unique combination of engineering and IT skills.  If you want to learn more about ProLytX and how we can help you bridge the gap between IT and Engineering, find us at www.prolytx.com.