Top Six Benefits of OPC UA for End-Users
Thomas Burke, president and executive director of the OPC Foundation, discusses how end-users are discovering the benefits of an OPC UA solution, including reliability, performance, multi-platform support and easy migration.
Welcome to the August 2009 edition of OPConnect, the official newsletter of the OPC Foundation. There have been a lot of things happening in 2009, and it’s been very exciting with respect to OPC Foundation and the OPC vendor community.
One of the most significant things is the recognition by the end-users of the value proposition for the OPC Unified Architecture (UA) technology.
There are six key features that OPC UA delivers to end-users: ease of use, plug-and-play operation, high reliability and redundancy, enhanced performance, multi-platform support, and an easy migration plan for existing OPC products to the OPC UA technology.
That last feature–easy migration plan–is critical to its success. Even though OPC UA is a revolutionary new architecture, it still provides a rock-solid evolutionary migration mechanism to allow existing OPC products to plug in and get many of the benefits of the OPC UA technology… Read more
OPC UA Interoperability Proves Worth For Pharma
Gary Mintchell, Editor in Chief, Automation World, discusses one of the key features of OPC UA technology–interoperability–and the promise this holds to significantly reduce integration costs across complex systems.
Integrating various components of an automation system can cost up to ten times the price of just the components, depending upon the complexity of the system. This fact alone can destroy the benefits of an integrated system. The latest specification from the OPC Foundation–OPC UA–holds the potential of greatly reducing this cost penalty. All we need is for more suppliers to adopt the specification and make it available to those charged with implementing an integrated automation system… Read more
OPC Server Meets Demand for 64-Bit Systems
New OPC server suites from Cyberlogic have 64-bit kernel mode device drivers, so users can deploy 64-bit systems regardless of the type of network or network adapter.
Cyberlogic has released 64-bit versions of the company’s OPC server suites that deliver an easy migration path for users moving to x64 Windows platforms. The 64-bit versions of Cyberlogic OPC suites provide native 64-bit support, including 64-bit kernel-mode device drivers, for all x64 Windows operating systems, including XP, Vista, Server 2003, and Server 2008.
As the shift toward 64-bit systems accelerates, demand is growing for 64-bit versions of popular software tools. Complicating the picture is the fact that some of the new 64-bit systems may have to be deployed in existing environments, either as replacements or for expansion… Read more
Integrating OPC Servers and SCADA Systems
Engineers working on a high-security project in Denmark used the OPC DataHub, from Cogent Real-Time Systems, for OPC client connections between equipment at the secure facility and a remote monitoring station.
In a recent data integration project, Siemens engineers in Copenhagen, Denmark, were able to connect equipment and instrumentation running in a high-security facility to a remote monitoring location using the OPC DataHub. The goal was to allow technicians access to the machines they needed to work on, without breaching security or permitting any non-authorized personnel on site.
At first the project promised to be a typical OPC application. The main objective was to connect a chiller unit with an OPC server running at a secure facility to two SCADA systems at a monitoring station, each enabled as an OPC client. However, it soon became apparent that there would be some problems with networking. OPC networking depends on the Distributed Component Object Model (DCOM), which at the best of times can be difficult to configure and slow to reconnect after a network break. To make matters worse, the OPC server provided by the chiller manufacturer was not up to the task… Read more
Process Analytics Finds Process Problems
In this whitepaper, Canary Labs outlines how companies, by using process analytical software, can gain great insight into their process operations to find problems and improve quality.
Process Analytics and Intelligence–sometimes called Manufacturing Intelligence–has transformed the way companies produce goods, understand their manufacturing processes, and ensure a quality product in ways we could not have foreseen ten years ago. Process Analytics and Intelligence can take many forms–from basic trend charts that highlight a single process variable to complex statistical analysis of multiple variables. Trending process variables allows you to go back in time and see what happened over the past minutes, hours, overnight, or over a weekend. When trend charts show that the data from the system is not typical, you can quickly identify the issue and resolve it before it becomes a major problem.
The cornerstone of any real-time Process Analytical and Intelligence solution is an effective data storage and retrieval capability. In manufacturing applications, data is generated from a multitude of sources, from devices as simple as a weigh scale to as complex as a PLC controlling a high speed bottling line… Read more
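The "not typical" test described above can be sketched as a simple rolling-statistics check. This is only an illustrative sketch with invented readings; the function and values are hypothetical and not part of any Canary Labs product:

```python
from statistics import mean, stdev

def find_anomalies(samples, window=20, n_sigma=3.0):
    """Flag readings that deviate from the rolling mean of the previous
    `window` samples by more than `n_sigma` rolling standard deviations --
    a basic test for trend data that is 'not typical'."""
    anomalies = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(samples[i] - mu) > n_sigma * sigma:
            anomalies.append(i)  # index of the suspect reading
    return anomalies

# A steady temperature trend with one spike injected at index 25
readings = [50.0 + 0.1 * (i % 5) for i in range(40)]
readings[25] = 75.0
print(find_anomalies(readings))  # → [25]
```

In a real historian the same idea would run continuously against logged tag values; the point here is only that a small deviation test over archived data is enough to surface a reading worth investigating.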
OPC Facilitates Increased Capacity
A water treatment plant in Utah doubled its capacity after deploying the System 800xA Extended Automation from ABB, which uses OPC 2.0 technology to gather and display data.
Quail Creek Water Treatment Plant (Quail Creek WTP) is part of the Washington County Water Conservancy District, a political subdivision of the State of Utah. The District is a regional water supply agency organized in 1962 under the Water Conservancy Act to develop a water supply for rapidly growing areas in Utah’s Washington County. The District is primarily a wholesaler of water to other agencies. Its main role is to develop or purchase water where it is available for its service area, and it is committed to serving its water customers in an efficient and cost-effective manner. The District serves water on a retail basis only when other local providers are not available or do not have facilities to do so. It is dedicated to developing the resource in an environmentally sound manner.
The Quail Creek Water Treatment Plant’s control system was in need of an upgrade in order to fulfill the District’s goals of efficient, cost-effective and environmentally sound operations… Read more
OPC Gets Green, Goes Nuclear and Gets Embedded
Thomas Burke, president and executive director of the OPC Foundation, discusses how OPC technology is advancing cutting-edge developments—from green energy and the world’s largest particle accelerator, to embedded intelligence and OPC UA-on-a-chip.
Welcome to the May 2010 edition of OPConnect, the official newsletter of the OPC Foundation. The year 2010 marks 15 years in the development of OPC technology. What began in 1995 as a task force to standardize on a device driver solution has morphed into the 400-member OPC Foundation, whose task it is to develop, support and maintain standards for moving data and information from embedded devices all the way through the enterprise.
OPC is “open connectivity” in industrial automation and enterprise systems based on “open standards.” These standards operate across multivendor and multiplatform systems, providing secure, reliable and interoperable communications for applications in industrial automation, building automation, energy management and more… Read more
Reaping the Wind in Real Time
The Matrikon OPC Server for Vestas Wind Turbines standardizes real-time access to wind farm data sources for improved management and control of distributed assets.
High oil prices and global warming concerns have more people looking at renewable energy sources such as wind power. Wind power installations have unique challenges due to energy variability and load balancing as well as accessibility of the many generating stations. The OPC Server for Vestas Wind Turbines can provide continuous access to real-time information needed for equipment monitoring and making production decisions… Read more
Upgrade Performance, Downsize Complexity
Softing’s OPC Development Toolkits for OPC UA assist software engineers in implementing effective OPC Clients/Servers and embedding OPC UA technology directly into target devices.
Softing, the world leader in providing conformant OPC development tools, continues to enhance its OPC Unified Architecture (UA) Toolkits for Windows, VxWorks, and Linux. The latest version of the OPC UA Toolkit comes with significant performance optimization, support of Kernel Mode in VxWorks systems, more programming support and functional extensions.
Softing’s Toolkits are currently the only products available that include all necessary OPC libraries to develop OPC Clients and Servers based on the Classic OPC DA, Alarms & Events, XML-DA, and the OPC UA specification… Read more
OPC Helps Cool World’s Largest Particle Accelerator
The OPC Top Server from Software Toolbox monitors chiller data on the Large Hadron Collider at CERN, helping cool circuits when proton beams collide at seven trillion electron volts.
In March 2010 at the international physics laboratory of the European Organization for Nuclear Research (CERN), two proton beams collided with a combined energy of 7 trillion electron volts in the world’s largest and highest-energy particle accelerator, the Large Hadron Collider (LHC). CERN, which operates some of the most technologically advanced facilities for researching the basic building blocks of the Universe, built the LHC to test the predictions of high-energy physics and to help particle physicists answer some of the most fundamental questions about the basic laws governing the interactions and forces among elementary objects, the structure of space and time, quantum mechanics and relativity.
The LHC results required years of investment in engineering, science and technology. Keeping the technology working reliably requires consistent monitoring of temperature and humidity in the tunnels and experimental areas of the LHC site… Read more
OPC UA Software Opens Up Linux Possibilities
A leading integrator uses the OPC UA server included with Ignition by Inductive Automation™ to provide better performance and reliability in a distillation refinery SCADA system.
Integrator Kyle Chase has begun to experience first-hand the benefits of OPC Unified Architecture (UA). Designed to allow for cross-platform compatibility, OPC UA delivers on the promise of performance and reliability. Chase explained that, although a fan of Linux, until now he could never use it in automation control systems because OPC relied on Microsoft’s Distributed Component Object Model (DCOM).
“To me, the move to a true cross-platform environment is important,” stated Chase, systems integration specialist for Surefire SCADA Inc. “This holds many advantages, especially when it comes to system flexibility and security. It helps keep costs down as well”… Read more
Partnerships: Newest Interoperability Initiatives for Industrial Automation
A quick look at the latest interoperability initiatives such as OPC UA and FDT 2.0, FDI and its implications for FDT, how OPC UA meshes with ODVA, PLCopen, and MTConnect.
While the last decade has been remarkable for the number of automation and control technologies introduced, another market force has produced an effect on the industry much more notable—collaboration among vendors and industry organizations.
The concept of brand-loyalty lock-in, in which an end-user company would purchase all of its devices and applications from a single vendor and its direct partners to guarantee that the applications and devices would work together seamlessly, has become completely archaic. However, it was commonplace not so long ago.
Oil & gas blending systems benefit from OPC historian, data trending and analysis
Canary Enterprise Historian, Canary Logger, and Trend Link software run independently of the HMI, provide graphical displays of trends to optimize maintenance and performance, and synchronize/archive data daily from remote sites.

The Canary software installations include the Canary Enterprise Historian, Canary Logger and Trend Link. The ability to configure the historian and logger to run as a service is extremely valuable, as is the added benefit of running independently of the HMI and connecting directly to an OPC server. Remote data logging is done at locations where two servers exist, allowing the use of a single Canary Enterprise Historian.
Randy Walker, control systems engineer, says, “Canary Trend Link is a valuable tool, allowing us to graphically review archived data for maintenance issues and performance. Templates of commonly reviewed trends can be saved for quick future access. The export utility is used to turn viewed trends into reports for distribution”… Read more
By Julie Fraser, President, Cambashi, Inc.
Deciding whether you need a manufacturing execution system (MES) is easier than ever, yet deciding how to proceed is more difficult than ever. Sound confusing? It might be—or it might not be.
For about 20 years, people have been asking me: “If my company has modern automated controls and an effective enterprise resource planning (ERP) system, do we also need MES?” My answer is more likely to be “Yes” than ever before. In fact, Cambashi and other industry analysts’ surveys for the past five years or so have shown that a higher percentage of companies plan to buy an MES than most other major application types.
Why does it make sense for so many companies to buy MES? Because they have not yet done so, at least not on a widespread basis. Many manufacturers are realizing that their carefully crafted information flows have a big hole in them—between the automation and the enterprise. That missing plantwide view is key to understanding your ability to fill orders at the time, quantity, and quality the customer requires. The plant is also where you can best measure suppliers. Being in the middle of the product lifecycle, it is where engineering change requests often originate. In fact, our research has repeatedly shown that the performance of plant operations correlates closely to the performance of the business.
There may be some businesses that don’t need to buy MES, but they are the few leaders that have excellent, modern and fully implemented ERP and automated control systems. Even then, whether you need MES is largely a matter of how you define the term, and how you describe your current systems. If the definition of MES is the plantwide information system that guides, tracks and monitors the end-to-end production process, then, as one high-tech industry executive quipped, “Every production company actually has MES—it just might be manual.”
So manufacturing and production companies need MES. Most of them need a software solution that delivers an integrated view of plant operations along with a means to enforce best practices. MES also delivers the information that customers want, just as much as they want the product you ship them. Given that the software in this market has been evolving and improving for over 20 years, there is a good chance that a commercially available solution exists to fit most companies.
If we agree that production needs to be visible, well documented, and tightly controlled, the case to invest in MES is reasonably clear. Yet key questions remain that keep the market confused:
Do we need MES or do we need manufacturing operations management (MOM)? MOM is a term popularized by the ISA-95 standard for the integration of enterprise and control systems, which defines the functions and workflows of these plantwide “Level 3” systems.
What does MES mean to our company? How should we scope our requirements for the MES functionality we need?
Will we need a third solution provider for MES? Or can one of our existing solution providers offer us what we need?
Most companies will have their own specific questions as well. These may or may not include the following: How will we form the team required to assess and purchase the software? How do we best research the options? How do we make the business case to upper management? How do we evaluate the options rationally? How do we ensure the project actually delivers on the vision?
Getting to answers
Each company must answer those questions themselves. Yet my experience as an industry analyst in the MES/MOM market for years gives me a perspective to help you set up a framework to answer those questions.
MES vs. MOM: these terms are largely interchangeable, and whether a solution provider uses one or the other should not matter to you. You can find overlapping but different definitions of each term, but don’t waste your time on that. The ISA-95 Model is hugely beneficial in helping you to scope the functionality as well as the integration into your enterprise system. The ISA-95 standard can also help everyone come to a common understanding of what you are pursuing with an MES project.
Operations Management Insights: For more information, visit our updated web site for targeted content related to all things MES and MOM. Visit bit.ly/awops
Defining MES: MES/MOM will certainly mean different things to different companies. In highly continuous process industries, the distributed control system (DCS) may be the hardware platform for MES. At the other extreme, in manual-intensive operations, the MES may be the primary control system in the plant. The key is to identify what is most critical for the plant to be profitable and to satisfy customers. Requirements must start from that business view, and also come from the detail of how the supervisors and floor personnel must work in that plant’s environment. A useful tool for software product education is Logica’s MES Product Survey. This guide has a great deal of information about dozens of products, providing a view of industries, standards compliance, configurability, functionality, and technologies. The guide also indicates some of the trends in the industry overall: http://www.mescc.com/mes-report.html
Do we need a separate MES supplier? This is the question that has become most confusing. Several of the major automation providers now offer MES, including Emerson, GE, Honeywell, Invensys, Rockwell, and Siemens. Several of the ERP providers offer more MES than the “plant floor module” as well. Most notably, CDC Software, SAP and Solarsoft have bought MES and manufacturing intelligence companies. Now the product lifecycle management (PLM) vendors are in the game too, with Dassault DELMIA and Siemens having made significant MES acquisitions. After that “encroachment” there are still literally hundreds of independent software providers offering MES/MOM in every corner of the globe. Some are regional; others focus on particular manufacturing industries; yet others focus on particular functional areas. It’s fine to have a bias toward the solution from an existing provider, but finding a few independent market leaders in your type of business and comparing them in an unbiased way can lead to a better decision—and more leverage with your existing automation or ERP partner.
Make it easier
The complex set of questions about how to actually move forward is addressed in an excellent independent certificate course. The Manufacturing Enterprise Solutions Association (MESA) offers a Global Education Program with both a Certificate of Awareness course (MES/MOM business awareness for advisors and decision makers) and a Certificate of Competency course (MES/MOM methodologies for those who will run the MES selection and implementation programs). Courses are offered on a regular schedule at various locations throughout the world, and can also be ordered as a company-specific or “in-house” program. For more information, go to: http://www.mesa.org/en/globaleducationprogram/educationprogram.asp.
The evidence shows manufacturers need this plantwide software. Most companies that implement MES find it makes life easier for production employees and makes profitability more predictable for plants. Cambashi has a number of white papers, articles, and research studies that explain various aspects of plant systems, available for no-cost download at http://www.cambashi.com/for-you-to-use.
Because production plants and processes vary far more than accounting, inventory, or even product design processes, the array of solution providers and functionality available can be confusing. To begin your journey, get some education, assemble a multi-disciplinary team and research solutions in use in your type of business. Then ask the right questions of your internal team, the software providers, and your trusted advisors. Whether they are systems integrators, distributors, management consultants, or industry analysts, they should have your interests at heart and most likely have some relevant experience.
Make MES/MOM a cornerstone of your approach to improvement in your plant, your supply chain, your product lifecycle and even your customer relationships. It can be. Each manufacturer must ask the right questions. Then, seek the answers. Asking and answering MES/MOM questions gets easier once you have some of your own experience.
Julie Fraser is president of Cambashi, Inc., the U.S. arm of the industrial-focused analyst/consulting/market research firm based in the United Kingdom.
For more information: Manufacturing Enterprise Solutions Association (MESA) Global Education Program: www.mesa.org/en/globaleducationprogram/educationprogram.asp
Cambashi whitepapers, articles and research studies www.cambashi.com/for-you-to-use
By Gary Mintchell, Co-Founder and Editor in Chief
FreeWave Technologies (www.freewave.com), a Boulder, Colo.-based manufacturer of reliable, high-performance spread spectrum and licensed radios for critical data transmission, was chosen by the Comision Federal de Electricidad (CFE) (www.cfe.gob.mx) for wireless data radio applications such as power consumption and substation monitoring, as well as control and monitoring of power networks.
CFE is the only electric power utility company in México. It generates, transmits and distributes power and energy to nearly 100 million customers and is one of the largest utility organizations in the world with approximately 100,000 employees around the country. It currently is using FreeWave’s FGR2-PE, FGR115-RC and HTPlus wireless data radios in nine of the 16 divisions throughout Mexico.
“Before implementing FreeWave radios, CFE was experiencing difficulties due to distance and maintaining line of sight in its communication networks,” said Federico Ibarra Otero, engineer at Ampere, FreeWave’s reseller partner in Mexico. “There were some locations where the monitoring devices were remotely located and required repeaters. FreeWave offered a solution that tackled both the distance and line-of-sight issues, and we have achieved more consistent, reliable and modern links since implementation.”
FreeWave radios are used for Supervisory Control and Data Acquisition (SCADA) applications to monitor and control switch gears, reclosers and power meters. Some of the networks are installed in heavily populated cities, such as Mexico City, Monterrey and Guadalajara, where the RF noise and line of sight can potentially cause major issues with wireless communications.
However, with FreeWave’s network design and flexibility, the radios achieve optimal communication, even with skyscrapers, tree coverage and high RF noise levels. Because CFE is the sole provider of power for the entire country, FreeWave radios also were relied upon for data transmission in a wide range of geographical areas, spanning from the big cities to remote locations in the mountains and desert. Additionally, each FreeWave radio is 100 percent tested for RF performance, from -40 °C to +75 °C, to ensure reliability in a variety of weather conditions.
“We are very satisfied with FreeWave’s Ethernet models, the FGR2-PE and HTPlus, as they let us manage serial and Ethernet devices at the same time,” said Mario Granados Villareal, engineer at CFE. “This allows us to incorporate all the data coming from the radios to the LAN networks, which not only saves time, but allows our entire communications network to operate seamlessly.”
By Matt Littlefield
It looks like 2012 will prove to be a very fruitful year for both the industrial automation space and LNS Research. This is going to be the first full year of operations at LNS Research and we are very excited to start establishing benchmarks in key areas of interest to line-of-business, corporate information technology (IT) and industrial automation executives. Based on trends over the past several years, we believe the following areas will constitute the biggest trends and areas of investment for 2012.
Quality management— Quality has long been a focus at leading firms, however, the way companies think about quality and the way companies architect IT systems to support quality is often disconnected. In 2012, LNS Research will examine how leading firms are using enterprise quality management software (EQMS) to manage quality business processes at the enterprise level and infuse quality management best practices across engineering, manufacturing, supply chain management, procurement, and service.
Manufacturing operations management— Leading firms are moving away from plant level manufacturing execution systems (MES) to begin managing the entire plant network holistically. These new manufacturing operations management (MOM) systems are leveraging native business process management (BPM) technology and unified data models to reduce the time and cost of implementations, as well as provide a more interoperable and collaborative environment. This interoperability will begin to move both up and down the technology stack, with major interoperability enhancements between enterprise applications like enterprise resource planning (ERP), product lifecycle management (PLM), and plant level systems such as distributed control systems (DCS), programmable logic controllers (PLC) and supervisory control and data acquisition (SCADA) systems anticipated for 2012. (See chart.)
Sustainability—Momentum behind corporate sustainability initiatives at many of the world’s most successful companies will only continue to build in 2012. The pressure is on for these companies to deliver tangible results. It is expected that investment in technology to support these initiatives will continue to grow. Key areas of investment will include: sustainability performance management, energy management, carbon management, environment, health, and safety (EH&S) and safety automation.
Asset performance management— Asset-intensive industries continue to face intense pressure to improve returns on assets and operating margins. One of the quickest ways leading companies have achieved success in these areas is by improving the reliability of key assets and the collaboration between operations and maintenance. This means companies are moving from a reactive to a predictive approach to maintenance, which in turn requires improved automated data collection, statistical models for predicting failures, and the right infrastructure to share this information across a large enterprise. It also involves companies investing in change management, so that the right metrics, incentives and culture are established to ensure that maintenance and production are on the same page.
Industrial Automation 2.0— LNS Research believes there has been a step change in the automation space over the past several years, requiring an entirely new approach to evaluating the value and deployment of industrial automation. Key technology trends of Industrial Automation 2.0 include the movement towards converged networks and standards, mainly involving the use of unmodified Ethernet.
Additional trends of Industrial Automation 2.0 include the convergence of traditional IT and automation networks, the use of traditional IT security solutions in industrial settings, an increased focus on safety and energy management, and the use of a single network to manage multiple disciplines of automation. 2012 will see continued focus and refinement of the trends included within Industrial Automation 2.0.
If only half of the anticipated growth in these areas turns out to be a reality for 2012, the year could mark a big step forward for the entire industry.
Matt Littlefield, email@example.com, is president and principal analyst at LNS Research, a consultancy based in Brookline, Mass.
Read more Matt Littlefield columns on regulations, safety compliance, business process management and all things enterprise. Visit bit.ly/awcolumn054
Modern touch displays, such as those used on smartphones or tablets, open up new opportunities for interfacing with computing devices. Gaming consoles also offer good material for new ways to interact with machines. The conference “Mensch und Computer” (human and computer) in Chemnitz, Germany dealt with these topics, as well as other technologies concerning how humans interact with computers.
Increasingly complex machines need to become easier to use. New operating concepts are needed, not only for operation, but for commissioning and maintenance as well. It’s worth thinking about new concepts for computing device interfaces.
Operation by touchscreen was the last big step in human-machine interface (HMI) technology. I still remember the first models very well; they required firm pressure to register an input. Since then, the quality of touchscreens has improved remarkably, but those possibilities are far from fully used in the industrial sector. Compared with most consumer products, most control concepts from automation specialists lag behind. Anyone who has used a computer or smartphone with gesture recognition will want to act the same way on every computer or computer-like machine. I often find myself trying to “swipe” an older PC to scroll through a document. A huge potential for machine operation is hidden at this point.
At the “Mensch und Computer” congress, it was interesting to see how often Microsoft’s Kinect console was used as an example. The device consists of a camera, a microphone, a depth sensor and a pedestal with an electric motor that aligns the device with the player. With Kinect, the Xbox game console can be controlled entirely without a typical controller. Some lectures included practical demonstrations of what is already possible with this human-machine interface. In a virtual dressing room, the proportions of a person were scanned by Kinect to dress that person in different clothes on a monitor. Another example was controlling a model helicopter through a person’s movements. This example also made the limits of the technology apparent: it was not yet possible to make the helicopter fly completely independently.
Learning from gaming
Despite these restrictions, the control concepts of gaming consoles can provide good examples for automation. A maker of robot controls, for example, could work on a solution for robot teaching quite similar to the Wii’s input unit. One of the lectures at “Mensch und Computer” posed the question, “Is it possible to have the same fun handling business software as it is handling gaming console software?” The answer wasn’t clear. But one thing was clear in Chemnitz: Consumer high-tech products are very inspiring for sometimes too-sober business users.
Maybe it’s time for a congress titled “Mensch und Automatisierung” (human and automation).
Martin Buchwitz, firstname.lastname@example.org, is editor of SPS Magazin in Germany.
By Barry Young
Several trends have already impacted the distributed control systems (DCS) market and are likely to continue to do so over the next few years. These include both product- and technology-related trends and general industry trends.
The DCS input/output (I/O) subsystem is responsible for bringing hundreds or often thousands of process measurements and other inputs into the system, and for sending control signals to a large number of valves, actuators, motors and other plant final control elements. The I/O subsystem is one of the most substantial parts of the DCS and, traditionally, a significant cost element. However, DCS suppliers are working to reduce both the cost and the complexity of their I/O by incorporating more intelligence and programmability into the devices.
Today, in a greenfield plant, most of the I/O supplied is on some type of bus network. Brownfield plants are also installing more bus I/O. However, with the large installed base of traditional 4-20mA I/O, the transition is very slow. Major expansions or revamps in brownfield plants consider bus I/O when the sensors and final control elements are also part of the project.
There is also a growing trend towards adding more wireless I/O and associated field devices, particularly for process and equipment monitoring applications.
As the lines between automation and information technology (IT) begin to blur with increased use of commercial off-the-shelf (COTS) technology, the network infrastructure of the DCS and the network architecture for plant information become increasingly intertwined. End users now often rely on the expertise of suppliers for consulting to set up these networks in a safe and secure manner.
Automation: Overdue for a Revolution: Jim Pinto cites Dick Morley’s push for more peer-to-peer decentralized control systems and how the future would be better served with more thin clients than “fat” PCs. Visit bit.ly/awcolumn_051
DCS suppliers started incorporating server virtualization a few years ago. Common uses of this technology include engineering development and simulating automation for operator training. Virtualization is not appropriate for all parts of the DCS; sometimes dedicated hardware will perform a given task better than a virtual server. A good example is the real-time process controller in a DCS, where speed, determinism and high reliability are major design considerations for the operation and safety of the plant. On the other hand, a virtual server hosting many applications on one box can be a good choice for “offline” applications such as control configuration, simulation and training.
Just as people today find it hard to live without their smartphones in daily life, process operators and production supervisors have increasing reliance on the ability to “access data anywhere, anytime” to perform their job functions. DCS suppliers address this trend by supplying tablet technology for roving operators and using smartphones for alerts and condition monitoring. This trend towards increasing mobility will grow in importance in the coming years.
Move to the cloud
There has been much talk in the industry about developments underway to move selected DCS applications “to the cloud,” a reference to moving applications to remote, Internet (public)- or intranet (private)-based servers. However, the control automation industry is very conservative by nature, and for the time being this is just talk. ARC believes that, ultimately, selected DCS applications are likely to move to private, and in some cases, even public “clouds”; but for now, end users are wary.
More process units these days are built and delivered on skids rather than built in situ. As a result, DCSs are showing up on the skids when they arrive at the plant or mill. Unless there has been good upfront coordination between the equipment supplier and the user’s automation team, the skid-mounted DCS technology can be different from the desired system for the plant. Heterogeneous DCS solutions require additional communication interfaces and significantly more engineering services.
Barry Young, email@example.com, is principal analyst at ARC Advisory Group in Dedham, Mass. ARC will be focusing on several of these important trends at the upcoming ARC World Industry Forum www.arcweb.com in Orlando, Florida, Feb. 6-9, 2012.
Why Amazon.com Is the Cloud-Computing King
By Evan Niu (TMFNewCow)
I’m not referring to the meaning of life, mind you. I’m talking about the annual list of the Top 500 supercomputers in the world. When you look at that list, Amazon.com’s (Nasdaq: AMZN) virtual supercomputer, built using its Elastic Compute Cloud, or EC2, ranks No. 42, according to a recent Wired report.
The reason that’s such a feat is that Amazon’s virtual powerhouse lives in the cloud, its raw processing power decentralized and spread throughout its global network of data centers. This contrasts with the old-school approach of calling up Cray (Nasdaq: CRAY) or Penguin Computing and ordering a multimillion-dollar machine, similar to what the feds just ordered sporting NVIDIA (Nasdaq: NVDA) and Advanced Micro Devices (NYSE: AMD) chips, or this one using only ARM Holdings (Nasdaq: ARMH)-based NVIDIA chips.
Cycle Computing is a small company that helps researchers and businesses tap into EC2’s supercomputing power, and CEO Jason Stowe is naturally a big proponent of cloud-based processing. Stowe believes that while there is still a place for owning a dedicated supercomputer, those days are numbered, as cloud-based supercomputing is increasingly able to satisfy what the market needs.
Amazon provides an option that is more affordable and can handle most things thrown at it. For example, Cycle helped set up a virtual supercomputer running 30,000 cores on EC2 for about $1,279 per hour. That may sound like a lot to the average user, but it’s chump change when compared with the alternative a researcher or business would face, which Stowe details:
If you created a 30,000-core cluster in a data center, that would cost you $5 million, $10 million, and you’d have to pick a vendor, buy all the hardware, wait for it to come, rack it, stack it, cable it, and actually get it working. You’d have to wait six months, 12 months before you got it running.
The takeaway is that even though Amazon’s solution doesn’t match the nosebleed horsepower of the No. 1 supercomputer, Japan’s K Computer, which is almost 44 times as fast, it satisfies what many organizations need, and does so at a fraction of the cost in both money and time.
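Stowe’s comparison can be sketched as a quick break-even calculation. The $1,279-per-hour rate and the $5 million to $10 million cluster cost come from the article; the break-even arithmetic is an illustrative simplification of my own, since it ignores power, staffing and depreciation on the owned hardware:

```python
# Rough break-even between renting a 30,000-core EC2 cluster and
# buying a dedicated one, using the figures quoted in the article.
EC2_RATE = 1279          # dollars per hour for ~30,000 cores on EC2
CAPEX_LOW = 5_000_000    # low-end purchase price of a 30,000-core cluster
CAPEX_HIGH = 10_000_000  # high-end purchase price

def breakeven_hours(capex, hourly_rate=EC2_RATE):
    """Hours of EC2 use whose rental cost equals the cluster's purchase price."""
    return capex / hourly_rate

low = breakeven_hours(CAPEX_LOW)    # roughly 3,900 hours (~163 days)
high = breakeven_hours(CAPEX_HIGH)  # roughly 7,800 hours (~326 days)
print(f"Break-even: {low:,.0f} to {high:,.0f} hours of continuous use")
```

In other words, under these simplified assumptions a researcher would need months of round-the-clock computation before ownership even began to pay off, which is exactly the case Stowe is making for renting.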
It’s no wonder Amazon is the cloud-computing king.
The tiny computer, which runs Linux on an ARM processor and sports USB, audio and video out, as well as an SD card slot, was designed to be an ultra-low-cost computer aimed at children.
In a blog post picked up by Business Insider this week, its creators noted that the machine will be available in January following some additional testing on the hardware and software.
At launch the diminutive machine will be offered in two configurations, one at $25 and the other at $35. The extra $10 gets you double the RAM, at 256MB, as well as an Ethernet port for getting online. Its creators have also announced the “Gertboard,” a small expansion board that can be added to the Raspberry Pi. Its purpose is to “flash LEDs on and off, drive motors, run sensors and all that other fun stuff.”
The computing project is the brainchild of game developer David Braben, and follows in the footsteps of previous low-cost computing initiatives such as One Laptop per Child, which aimed at a $100 price tag for Internet-ready laptops. There was also last year’s $35 tablet in India, which ran Google’s Android OS.