
RESOURCES

We believe that a rising tide lifts all ships, so we work hard to share knowledge that educates and informs. We want to lead the way to a better future for the industries we serve, making facilities and their operations more efficient and safer.

 

The Wild Catters Power Hour

Guests Mike Antosh and Blake Biernacki explore the benefits of digital transformation and the use of innovative technologies in modern organizations. Recorded May 11th at 10 AM CST.
LISTEN NOW

The Wild Catters Podcast

Collin with the Digital Wildcatters sat down with Blake Biernacki and Mike Antosh from ProLytX - Engineering IT to chat about the work they are doing to bridge the gap between major engineering companies and owner-operators in the O&G industry. By bringing together engineering, IT, and consulting services, ProLytX helps increase operational efficiency and functional safety.
LISTEN NOW

The Inherent Challenges of Digital Twins and Advanced Analytics

As a baseline, owners of industrial facilities require comprehensive data throughout the lifecycle of their plants. However, the industry-wide shift toward data-centricity, digital twins, and advanced data analytics requires not merely a technological evolution but a foundational change. The advanced analytics that owners desire are not possible with deficiencies in foundational data. The conventional model, which relies on Engineering, Procurement, and Construction (EPC) entities to manage data for operational facilities, presents an inherent conflict: data ownership and management sit in the EPC’s hands, yet there are often multiple EPCs working on a given project. As a result, the industry grapples with data silos, where valuable information remains underutilized and frequently inaccessible. While larger EPCs have begun adopting data-centric applications, there is still a gap with smaller entities that have been slower to make this shift. System-agnostic, third-party service providers like ProLytX can unlock engineering data from multiple EPCs, or data sitting in antiquated document management systems, and help owners leverage it for the operations and maintenance of their facilities transparently and efficiently.

The prevailing practice of putting equipment data in an asset management system and engineering drawings/lists in a document management system leaves owners struggling with data availability and trustworthiness. Here are a few things to consider:

  • Asset management systems house only a fraction of the overall equipment data. This is by design, as the consumers should reference the engineering data systems to get the full background for the asset. However, these systems are rarely well synchronized.
  • Document-centric approaches result in duplicated and fragmented information that is difficult to manage. You can scrape and hotspot documents in an attempt to make them more intelligent, but that has limitations.
  • Building a solid data foundation is crucial for unlocking the potential of the digital twin, but it is no easy task. Data-centric engineering applications (3D models, smart drawings driven by a database, cloud-based engineering tools, etc.) are the key to this foundation, as the sketch after this list illustrates. These applications require a collaborative effort from administrators, developers, engineers, and designers.
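
As a simplified illustration of that last point, the sketch below contrasts a data-centric lookup with scraping a document. It is a minimal sketch only: the table, column names, and tags are hypothetical, assuming some relational engineering data store, and are not a recommendation of any particular schema.

    import sqlite3

    # Hypothetical, minimal data-centric model: one row per tagged instrument,
    # authored and kept current by the design tools instead of extracted from documents.
    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE instruments (
        tag TEXT PRIMARY KEY, service TEXT, pid_drawing TEXT, datasheet_rev TEXT)""")
    conn.execute("INSERT INTO instruments VALUES ('FT-1001', 'Feed flow', 'PID-0017', 'C')")

    # A consumer (maintenance, analytics, a digital twin) queries the data directly;
    # no scraping or hotspotting of a drawing is needed to answer the question.
    row = conn.execute(
        "SELECT service, pid_drawing, datasheet_rev FROM instruments WHERE tag = ?",
        ("FT-1001",),
    ).fetchone()
    print(row)  # ('Feed flow', 'PID-0017', 'C')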

Technology providers are doing their part to close this longstanding gap and steer the industry toward a more progressive, data-centric paradigm. Intelligent design and engineering systems like Hexagon Smart Instrumentation (formerly INTools) and Smart P&ID emphasize housing and using data in the authoring system rather than perpetuating document-centric practices.

Essentially, the conflicts inherent in traditional data management models are being addressed as owners and EPC entities transition to more intelligent design systems. ProLytX is working to empower the industry with comprehensive data and steer it toward a more data-centric and analytically driven future; we are committed to helping companies achieve the digital twins and advanced analytics they desire.

The Origin of Engineering Technology

In the dynamic realm of engineering technology, the landscape has transformed significantly over the years. In a recent podcast featuring Blake and Mike, we got an insider’s look at the evolution of tech in the industry, shedding light on the shifts from traditional paper-based processes to the cloud-centric approaches we see today.

VIDEO LINK

Back in the analog era, engineers wielded pens and rulers, drawing intricate plans on drafting tables. Then came the digital revolution in the ’80s, with the introduction of AutoCAD and the widespread adoption of personal computers. The transition marked a pivotal moment as the industry embraced newfound efficiency.

Venture into the early 2000s, and the intersection of engineering and IT took center stage. In this technological tug-of-war, servers and software licenses became integral components of the landscape. Bridging the gap between these two worlds were the unsung heroes — Engineering Technologists. Blake and Mike emphasized their crucial role in translating the language of IT for engineers and vice versa, facilitating smoother collaboration.

The global project era did not come without challenges. As teams spanned the globe, technologies like Citrix facilitated remote collaboration, but not without hurdles. Enter Active Directory, adding another layer of complexity to the tech ecosystem.

Looking to the future, the landscape presents new challenges. Cloud technology, integration complexities, and the intricate dance of APIs are on the horizon. The key to success will undoubtedly be a strategic focus on AI, customized solutions, and a commitment to staying at the forefront of technological advancements in the ever-changing landscape of engineering technology.

In summary, the journey from traditional paper-based methods to the cloud era is a tale of innovation, the vital role of engineering technologists, and the steadfast support of industry leaders like ProLytX. ProLytX specializes in optimizing engineering applications, managing digital twins, and ensuring seamless operation of industrial systems. As organizations navigate the future of engineering tech, ProLytX acts as their guide through the intricate technological landscape, helping them stay ahead and lead through the exciting possibilities that lie ahead.

The Truth About the “Easy Button”:

How to Accurately Migrate Massive Amounts of Engineering Data in Minutes

Data is an asset, and operators know that digitizing legacy engineering data, moving it into company-managed platforms, and maintaining quality data-centric systems in-house are necessary to ensure competitive advantage and mitigate risks. However, digital transformation can be a tedious and arduous process, not for the faint of heart. The problem is that legacy engineering data is typically not structured in neatly labeled columns and rows or rich with metadata tags as it is in many other domains. Instead, engineering data can be in the form of PDFs, CAD files, Excel workbooks, Word documents, napkins, and other ambiguous formats. In most cases, these unstructured documents or renderings have little or no metadata at all.

To add to the confusion, not only does legacy data come in various file types, but even within the same file type there can be numerous formats for the same asset. Consider what happens when digitizing data for current assets, such as control valves. A facility could have hundreds of control valves installed by several different vendors or contractors, each with its own way of classifying and labeling the specifics of the valve. This leads to the same data point being labeled differently across the specification documentation: one vendor may call a data point “type,” another may call the same data point “material,” and yet another might call it “metal.” Before this data can be migrated to digitize the facility, it must be formed, mapped, and rationalized.
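
To make the mapping step concrete, here is a minimal sketch of rationalizing those differently labeled vendor fields into one canonical record before migration. The vendor labels, canonical field names, and review step are hypothetical and purely illustrative; they are not an actual client schema or ProLytX tooling.

    # Hypothetical vendor-to-canonical field mapping built during the rationalization step.
    # Each vendor labels the same control-valve attribute differently ("type", "material", "metal").
    FIELD_MAP = {
        "vendor_a": {"type": "body_material", "size": "line_size"},
        "vendor_b": {"material": "body_material", "line size": "line_size"},
        "vendor_c": {"metal": "body_material", "nominal size": "line_size"},
    }

    def rationalize(record: dict, vendor: str) -> dict:
        """Rename vendor-specific fields to canonical names; keep unmapped fields for review."""
        mapping = FIELD_MAP[vendor]
        canonical, unmapped = {}, {}
        for field, value in record.items():
            key = field.strip().lower()
            if key in mapping:
                canonical[mapping[key]] = value
            else:
                unmapped[field] = value  # flagged for an engineer to map, not silently dropped
        return {"canonical": canonical, "needs_review": unmapped}

    print(rationalize({"Type": "316 SS", "Size": '2"'}, "vendor_a"))
    # {'canonical': {'body_material': '316 SS', 'line_size': '2"'}, 'needs_review': {}}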

There are traditionally two approaches to data rationalization: manual and automated. Unfortunately, both have flaws in their outcomes and in how progress is measured: metrics track how quickly the data moves from one system to another, with no appreciation for the time it takes to properly tag and map it.

First is the laborious and costly approach of hiring an engineering company to rationalize and migrate the data manually. Of the two traditional methods, this results in the highest data integrity. However, this approach is slow, and the interruption to business can be drawn out and frustrating, taking months to complete. Additionally, because this is a lengthy manual effort with highly skilled resources, it is often cost-prohibitive.

The second is an automated approach. Automation can rely on A.I. to rationalize and migrate the data. Unfortunately, automated data migration is often performed by well-meaning software vendors and I.T. consulting companies not trained on the uniqueness of engineering data, who naively oversimplify the effort required to digitize legacy engineering, facilities, and asset data. They have little understanding of the intricacies of engineering data and believe that the same approach they apply in other domains will work here. They will say, “It is easy; we will put it in a data lake, then A.I. and Big Data programs will do the work.” But is that true? Technically, yes, the data may be there, but can you find it upon search? In the case of documents and drawings without metadata, the difference between putting them in a data lake and putting them in a black hole is negligible. Additionally, the lack of engineering-specific knowledge prevents these vendors from spotting issues upon review. The outcome is low data integrity, resulting in low trust and increased risk. The upfront cost is lower, but the resulting clean-up of such an approach is expensive, with overall costs exceeding those of the more laborious manual approach.

There is another approach that marries the two traditional methods. It uses technology to automate the transfer and engineers to rationalize the data and map the outcomes. The result is a migration with high data integrity at a cost far lower than the traditional manual approach, with brief or no business interruption. The secret is in the upfront mapping and rationalization. By assessing each of the data formats and intelligently tagging and mapping each to the new digital environment, with sample testing along the way, the result is quality data that is trustworthy.
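
The sample testing itself can be lightweight. Below is a minimal sketch of spot-checking a migrated batch against its source before the full cut-over; the data structures and tag names are hypothetical and for illustration only.

    import random

    def sample_check(source: dict, migrated: dict, sample_size: int = 50) -> list:
        """Compare a random sample of migrated values against the source and report mismatches."""
        tags = random.sample(sorted(source), min(sample_size, len(source)))
        mismatches = []
        for tag in tags:
            if migrated.get(tag) != source[tag]:
                mismatches.append((tag, source[tag], migrated.get(tag)))
        return mismatches  # an empty list means the sample passed; otherwise remap and rerun

    # Example: one deliberately wrong value is caught before the full cut-over.
    source   = {"FV-101": "316 SS", "FV-102": "WCB", "FV-103": "Monel"}
    migrated = {"FV-101": "316 SS", "FV-102": "WCB", "FV-103": "CS"}
    print(sample_check(source, migrated))  # [('FV-103', 'Monel', 'CS')]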

Measuring success by how quickly the migration starts, and judging progress by the migration rather than the rationalization, is a mistake. Patience is key. The adage “measure twice, cut once” applies. Invest the time in intelligently mapping the data, and the migration should be as simple as hitting an “easy button.” We recently moved over 5.6 million cells in less than 18 minutes for a client, but it took three months to map and verify. The client is not only thrilled with the high quality of data now available to its engineers and key stakeholders, but the project also came in under budget and sooner than promised.

Data migration indeed can be as easy as hitting the “easy button.” ProLytX is an Engineering I.T. firm based in Houston, TX, and is a leader in this field, coaching clients to success with a unique combination of engineering and I.T. skills.  If you want to learn more about ProLytX and how we can help you bridge the gap between I.T. and Engineering, find us at www.prolytx.com.

Say Goodbye to Proof Testing

One of the biggest and most costly challenges of plant maintenance is proving that systems are functional and safe. Proof testing is one of the most critical aspects of plant operations and costs millions of dollars in lost production every turnaround in the name of safety and compliance.

Even though we have come a long way in how we monitor the operations of facilities, documentation, reporting, and analytics remain problematic. This is because information lies in disparate systems. Instrumentation data from the SIS and DCS are not easily accessible and are typically low-resolution by the time they hit the DMZ or business LAN; as a result, valuable trip information is left uncaptured or hidden, and the entire proof testing cycle continues. Consequently, companies currently rely on manual testing and teams of technicians to manage this in the critical path. With digital transformation and an analytics engine to drive analysis and better reporting, plants can take advantage of trip and shutdown information to reduce test crews and streamline offline testing.

A general lack of visibility means that the entire proof test cycle must run its course when it comes time for routine maintenance. Shutdowns that just occurred must be simulated again from the field for validation and sign-off purposes. If there were better analysis and reporting, the testing could be optimized, thereby reducing the critical path and saving hundreds of thousands, if not millions, of dollars every turnaround. ProLytX has partnered with industry majors to tackle this problem.

ProSMART (System Monitoring & Analytics in Real-Time) software can understand how an SIS/DCS is configured and perform high-resolution data capture and real-time analytics and reporting. This disruptive technology is transforming the way plants look at and schedule proof testing and even instrument maintenance. ProSMART removes work scope and time-consuming tests during turnarounds by automating manual reporting tasks and data collection. It also provides a dashboard for leaders to manage risk, approve reports, schedule testing, and identify bad actors.
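
The underlying idea can be sketched independently of the product: if high-resolution event data shows that a final element already performed its full safety action during a real trip, within its acceptance criteria, that demand can be credited against the proof-test plan. The event format, tag names, and criteria below are hypothetical and for illustration only; this is not the ProSMART implementation.

    from datetime import datetime

    # Hypothetical trip event records captured at high resolution from the SIS/DCS.
    trip_events = [
        {"tag": "XV-2001", "action": "full_close", "stroke_time_s": 4.2, "t": datetime(2024, 3, 1, 2, 14)},
        {"tag": "XV-2002", "action": "full_close", "stroke_time_s": 9.8, "t": datetime(2024, 3, 1, 2, 14)},
    ]

    # Hypothetical acceptance criteria from the safety requirements specification.
    MAX_STROKE_TIME_S = {"XV-2001": 6.0, "XV-2002": 6.0}

    planned_tests = {"XV-2001", "XV-2002", "XV-2003"}

    # Credit a planned proof test when the trip exercised the device within its criteria.
    credited = {
        e["tag"]
        for e in trip_events
        if e["action"] == "full_close" and e["stroke_time_s"] <= MAX_STROKE_TIME_S[e["tag"]]
    }
    remaining = planned_tests - credited
    print(sorted(credited))   # ['XV-2001']  -> documented trip evidence, test scope removed
    print(sorted(remaining))  # ['XV-2002', 'XV-2003']  -> still require offline testing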

Benefits include:

  • Real-time compliance with IEC61511
  • Automated reporting and identification of successes / failures
  • Reduction of costly testing during turnarounds
  • Improved real-time risk management
  • Cross-platform communication spanning a majority of SIS/DCS platforms

Recently, a facility using ProSMART identified $1.5MM in turnaround cost savings due to a reduction in SIS Proof Test requirements and 3rd party technician crews. During the plant shutdown, the operator captured enough data to eliminate over 20% of the planned maintenance testing scope.  Automated reports were validated by 3rd party auditors and found to be more than sufficient.  Accurate valve stroke time data and instrumentation bad actors were captured, which also allowed for targeted maintenance activities.
 
ProLytX is an Engineering I.T. firm based in Houston, TX, and is a leader in this field, coaching clients to success with a unique combination of engineering and I.T. skills.  If you want to learn more about ProLytX and how we can help you bridge the gap between I.T. and Engineering, find us at www.prolytx.com. 
 

Unrestricted Risk

When an operator considers cybersecurity, they tend to focus on Legal, HR, and Finance data. This makes sense; intellectual property and personal information security are essential for competitive advantage, and Dodd-Frank, SOC, and FTC regulatory compliance also come into play. But what about facilities and engineering data? As the hosting of major Capex projects shifts from contractor environments to owner-managed environments, the data third parties need in order to collaborate is already accessible through the engineering applications, so what can be gained from securing it? The better question is, what can be lost by not doing so?

A while back, in a meeting with a client, we were discussing the vulnerabilities of remote access to engineering solutions. The client was sure that their engineering applications were 100% secure. To prove it, he provided a guest log-in to their hosted engineering applications environment during the meeting. Within minutes, our team had access to the backend data and environment details of one of their most widely used applications. A few moments longer, and we had made it onto their internal business network. They were stunned. This could have been a significant safety risk had these applications been hosted in part on their PSN.

So why is no one looking at this type of access as a security breach? For one, the objectives are mostly honest; the perpetrator’s goal is typically not to sabotage or steal, but to make their own projects more efficient. After all, the data is project information and is made available in the engineering application anyway. However, it is not the access to the data that is a concern; it is how the data is accessed. Problems occur when the user bypasses the engineering applications to gain entry into the backend databases. This access to the database can allow contractors to streamline their workflows or practices. This is good, maybe? But don’t let these benign intentions distract from the inherent risks.

There are several potential problems with failing to address security in these engineering application-hosted environments.

Direct and unauthorized access to the database undermines the safeguards built into application workflows and bypasses approval processes and activity tracking.

  1. There is no way to guarantee that everyone is operating with the best intentions.
  2. There is no way for a user with knowledge of only one project to fully understand how data is being used throughout the facility on other projects.
  3. Changes to the underlying data can be disastrous and expensive even when the intentions are virtuous.

With today’s remote work environments and large data projects, it is common for operators to work with several contractors across multiple projects at any given time: all of them accessing the same information. The impact of changing a single data point could have devastating and costly repercussions to others, like a catastrophic failure or tens of thousands of man-hours to fix an issue. Breaches don’t have to be terroristic or competitively motivated to be threatening. They generally come from engineers merely trying to simplify their work by venturing into a restricted area without a sign or lock on the door.

By preemptively implementing an engineering application-based security solution in hosted environments, companies can prevent unwanted access, maintain better data integrity, and minimize the risk of mistakes whose consequences are far more costly than the security solution itself.
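
In practice, part of such a solution is simply closing the back door at the database layer so that every change has to flow through the application. Here is a minimal sketch, assuming a PostgreSQL-backed engineering database; the connection details, schema, and role names are hypothetical.

    import psycopg2  # assumes a PostgreSQL-backed engineering database

    # Hypothetical hardening script: contractors get no direct table access at all;
    # only the application's service account can touch the schema, so every change
    # passes through the application's workflows, approvals, and activity tracking.
    conn = psycopg2.connect("dbname=engdata user=dba")  # hypothetical connection details
    with conn, conn.cursor() as cur:
        cur.execute("REVOKE ALL ON ALL TABLES IN SCHEMA engineering FROM contractor_role;")
        cur.execute("GRANT USAGE ON SCHEMA engineering TO app_service_account;")
        cur.execute("GRANT SELECT, INSERT, UPDATE ON ALL TABLES IN SCHEMA engineering TO app_service_account;")
    conn.close()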

ProLytX is an Engineering IT firm based in Houston, TX, and is a leader in this field, coaching clients to success with a unique combination of engineering and IT skills.  If you want to learn more about ProLytX and how we can help you bridge the gap between IT and Engineering, find us at www.prolytx.com.

Safer Through Automation

Everyone in an oil and gas facility understands that safety is the highest priority, yet even with regulatory mandates and safety standards, there is still room for improvement.

There have been 130 incidents at chemical facilities reported over the past 20 years.1 Between September 1995 and January 2015, there were approximately 128 fatalities and another 218 injuries at oil and gas refineries across the U.S. Of those who died, approximately 60 were the result of an explosion or fire, 35 either accidentally fell or were crushed by heavy equipment, 15 died of asphyxia, 2 were electrocuted, and 7 died of natural causes, typically cardiac arrest.2
Operating companies go to great expense to avoid such accidents; in fact, between extended downtime and labor costs for testing, safety is one of the largest expenditures for an operating facility. Functional safety testing is usually built into annual, semiannual, or triennial maintenance test schedules. When the facility is offline for maintenance and testing, it can cost the operator in excess of one million dollars per day in lost revenue and overhead expenses. This significant investment is a testament to the importance corporations place on safety.
 
It goes without saying that operators are looking for a way to reduce functional safety test costs without increasing risk. The challenge is that the usual testing process is rigorous and manual, riddled with redundancy and ripe for human error. Often, because of the complexity and costs, some routines are reduced to minimalistic cause-and-effect tests. This simplifies the testing process but may not provide adequate testing and documentation. Some of these tests might meet regulatory standards, but they are not the most thorough, and they often fail to account for all of the potential consequences; in the unfortunate event of an incident, documentation can be the difference between a favorable and a costly investigation outcome.
 
Maintaining the safest facility possible and staying in regulatory compliance rely on accurate and thorough documentation. However, documentation can be a point of weakness. The more thorough the test, the more complex the documentation. There can be thousands of pages of procedure to test a single system, so it is a challenge to be consistent and to keep documents up to date. On the other hand, when documentation is poor or incomplete, much reliance is put on individual interpretation and the experience of the tester, resulting in tests being repeated differently and affecting their quality. All of these are potential points of failure that increase risk and liability.
 
Operating companies are looking for solutions to improve overall operations, and many are investing in digital transformation and automation. But while solutions have been introduced that aid in training testers with digital twins and simulation, there are few solutions that leverage technology to automate and comprehensively improve functional safety (validation) testing and eliminate systematic error. Of those, only one is fit for purpose: Test Drive by ProLytX.
 
Unlike other testing solutions, Test Drive is vendor-agnostic and is compatible with most Programmable Logic Controllers (PLCs). It can be deployed as SaaS or on-prem. The solution removes human error by taking a templated approach, yet still allows for human expertise and oversight with a people-approved process. The engineer-designed user interface is far superior to the multi-system, multi-document traditional methods, bringing everything into a single view.
 
The Test Drive implementation process starts with a documentation audit and an IEC 61511 third-party review so that any issues with design interpretation and internal bias are removed. Then, by establishing a repeatable automated testing procedure with consistent and accurate documentation outputs, the rigorous and repetitive process of regular maintenance and Management of Change (MoC) testing can be completed, and regulatory reporting obligations met, without worry to the operator.
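
As a rough illustration of what a templated, repeatable test step can look like, consider the sketch below. The structure, tag names, and acceptance values are hypothetical and do not represent the actual Test Drive interface; the point is only that the expected cause-and-effect becomes data, so every execution runs, checks, and documents the same way regardless of who runs it.

    from dataclasses import dataclass

    @dataclass
    class TestStep:
        description: str
        force_tag: str        # input to force in the PLC / simulation
        force_value: float
        expect_tag: str       # output that must respond
        expect_value: int
        max_response_s: float

    # Hypothetical templated proof-test steps drawn from the cause-and-effect matrix.
    TEMPLATE = [
        TestStep("High pressure trip closes feed valve", "PT-3001", 95.0, "XV-3001", 0, 6.0),
        TestStep("Low level trip stops pump",            "LT-3002", 10.0, "P-3002_RUN", 0, 3.0),
    ]

    def run_step(step: TestStep, read_output, write_input) -> dict:
        """Execute one templated step against a vendor-specific I/O layer and record the result."""
        write_input(step.force_tag, step.force_value)
        observed = read_output(step.expect_tag)  # polling and response timing omitted for brevity
        passed = observed == step.expect_value
        return {"step": step.description, "observed": observed, "passed": passed}

    # read_output / write_input would be supplied by a PLC-specific adapter (e.g. OPC UA),
    # which is what keeps the template itself vendor-agnostic.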
 
The real benefit of Test Drive to the operator is not only knowing that their facility has optimized safety for its workers and the environment, but that the solution has improved overall testing rigor while shifting functional safety testing off the critical path due to the efficiency of automation. This can reduce the testing process from weeks to days, potentially saving the company millions.
 
1 According to The U.S. Chemical Safety Board (CSB), an independent, non-regulatory federal agency that investigates major chemical incidents.
 
2 Article by Jim Malewitz, published by the Texas Tribune in partnership with the Houston Chronicle on March 22, 2015; it includes information from OSHA records, government investigation reports, newspaper archives, and legal filings.

ProLytX is an Engineering IT firm based in Houston, TX, and is a leader in this field, coaching clients to success with a unique combination of engineering and IT skills.  If you want to learn more about ProLytX and how we can help you bridge the gap between IT and Engineering, find us at www.prolytx.com.