
Streamlined data management

21 July 2010

In both exploration and enhanced recovery, massive amounts of data are critical to the decision-making processes that lead to exploratory drilling or to field operation optimisation that improves existing recovery rates.

The cost and complexity of designing, implementing, and managing traditional direct-attached storage infrastructures can be substantial. And when the needs of the dynamic enterprise inevitably change, often quickly and even dramatically, traditional static systems require re-provisioning. This can cause disruption and increase management overhead and risk.

To address these issues, IT organisations typically overprovision systems and pre-allocate resources. However, this approach is costly and offers only temporary relief from inevitable expense and disruption.

To support seismic processing, interpretation, and visualisation, NetApp delivers a comprehensive data management and storage solution that is said to alleviate the data bottlenecks, reduce costs, and improve characterisation accuracy and technical team productivity.

Using NetApp fabric-attached storage (FAS) systems to store and manage seismic volumes, attribute volumes, horizons, faults, well logs, and other data types offers petro-technical asset teams the highest levels of performance and data protection.

Increased characterisation accuracy

Seismic and other geophysical information assumes many forms and can be useful over the span of many years. Since the introduction of improved seismic imaging and data visualisation applications, average drilling success rates have increased from 20% to over 50%.

Data management solutions must be easy to administer and must also leverage legacy seismic, drilling, and operations data by unifying the storage of that data on one high-availability platform. The technical team probably includes a number of geoscientists working with a variety of desktop applications, and seismic processing systems typically run on UNIX or Linux servers.

The NetApp Data ONTAP operating system takes advantage of multiple processors, providing access to data by using block- and file-level protocols on a single storage system. NetApp systems support all the common NAS and SAN protocols from a single platform for different petro-technical applications, so all users working on any platform can access storage directly. This means that engineers and geo technicians, from one desktop, can access the same underlying Network File System (NFS) and Common Internet File System (CIFS) data stores without data migrations, data copies, or reinstallations.

Any upstream applications can access and manipulate the same raw data to preserve legacy application access and allow any application to work from the same data store. Multiprotocol support also centralises storage, eliminating the need for separate volumes for separate applications. In addition, NetApp MultiStore allows data managers to create separate, completely private logical partitions quickly and easily on a single storage system. Each virtual storage partition maintains absolute separation from every other storage partition to permit different enterprise departments to share the same storage resource without compromising privacy and security.

Eliminate emulation layers

A storage system is required that allows different platforms to share network files while also supporting file locking. With NetApp, the same storage system can support a Fibre Channel or iSCSI SAN in addition to NAS protocols. Furthermore, with the flexibility of NetApp storage, fewer storage devices are needed, and each system does more.

Unify pre-stack and post-stack processing

The initial serial stages of seismic processing involve specifying velocities and data filtering operations to improve the signal-to-noise ratio, which do not require the same data throughput as parallel seismic migration applications.

Because pre-stack and post-stack imaging applications use parallel processing algorithms, they require very high I/O data rates and large high-performance computing clusters, and they benefit most from NetApp Data ONTAP 8 cluster-mode storage solutions. Moreover, moving to the NetApp Unified Storage Architecture for all seismic processing stages eliminates the need to move seismic data from one storage system to another, which reduces cycle times, improves decision making, and reduces the time to production.

NetApp unified storage eliminates the complexity and data bottlenecks in heterogeneous environments to improve collaboration between geoscientists, while saving the capital and management costs that result from maintaining multiple separate storage environments specific to each application.

Rapidly replicate reference projects

A factor that commonly affects engineers’ productivity is the speed with which they can create a copy of a master or reference project and then start manipulating source files to build geophysical and petro-technical models of the property, creating tens or even hundreds of attribute volumes. Multiple asset team members may need to access the same source data – sometimes at the same time.

To preserve underlying data integrity and eliminate high overhead, a storage system must allow users to generate instant replicas of data sets and storage volumes that require no additional storage overhead. As users create and change reference projects, volumes should be able to dynamically expand and shrink on demand.

FlexVol and FlexClone from NetApp enable creation of attribute cubes against larger data sets and are a simpler data management solution than managing master and working data stores. This approach allows petro-technical teams to create as many reference projects as they require because these volumes demand far less storage space.
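The reason clones demand so little storage space is copy-on-write: a clone references its parent volume's blocks and consumes new space only for the blocks it changes. The sketch below illustrates that behaviour in Python; it is a conceptual model (the `Volume` class and its methods are invented for illustration), not NetApp's implementation.

```python
class Volume:
    """Conceptual copy-on-write volume: blocks are shared until written."""

    def __init__(self, blocks=None, parent=None):
        self.own = dict(blocks or {})   # blocks this volume stores itself
        self.parent = parent            # shared baseline (the cloned master)

    def clone(self):
        # A clone starts empty: it references the parent's blocks for free.
        return Volume(parent=self)

    def read(self, addr):
        if addr in self.own:
            return self.own[addr]
        return self.parent.read(addr) if self.parent else None

    def write(self, addr, data):
        # Only writes allocate new space in the clone.
        self.own[addr] = data

    def space_used(self):
        return len(self.own)


master = Volume({0: "seismic-A", 1: "seismic-B", 2: "horizons"})
project = master.clone()
print(project.space_used())      # 0 blocks: the reference copy is free
project.write(1, "filtered-B")   # derive one attribute volume
print(project.space_used())      # 1 block: only the changed data costs space
```

This is why a team can spin up tens of reference projects against the same master data set: each project pays only for its own modifications, and the master is never touched.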

Improve remote access

In some cases, an asset team may send an image file to a remote colleague instead of a complete or even partial project file as a workaround for slow Internet connections and less robust remote processing platforms. To improve access to the real underlying data, access can be provided either by maintaining synchronisation between remote systems and the central project master or by using proxy servers that cache a subset of the data at the remote site.
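The caching approach works because only the data a remote user actually requests ever crosses the WAN, and repeat reads are served locally. A minimal read-through cache sketch (the class and its names are hypothetical, for illustration only):

```python
class RemoteCache:
    """Sketch of a read-through cache at a remote site."""

    def __init__(self, central_store):
        self.central = central_store   # authoritative project master
        self.local = {}                # subset cached at the remote site
        self.wan_transfers = 0         # count of fetches over the slow link

    def read(self, key):
        if key not in self.local:      # cache miss: fetch once over the WAN
            self.local[key] = self.central[key]
            self.wan_transfers += 1
        return self.local[key]


master = {"horizon-1": b"h1-data", "fault-set": b"f-data", "well-logs": b"w-data"}
site = RemoteCache(master)
site.read("horizon-1")
site.read("horizon-1")     # second read is served locally
print(site.wan_transfers)  # 1: repeated access costs no extra bandwidth
```

The trade-off against full synchronisation is working-set size: a cache wins when each remote user touches a small slice of the project; a synchronised replica wins when the whole project is needed offline.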

NetApp provides solutions that complement both caching and replication that you can use to distribute project access to remote sites. NetApp FlexCache creates a flexible caching layer in your storage infrastructure that automatically adapts to changing usage patterns to eliminate bottlenecks created by slow or congested WANs. FlexCache works particularly well for upstream applications in which performance is critical. When you use FlexCache for remote caching, there’s no replication to manage, bandwidth costs are reduced because only the data that is actually needed is transferred, and the latest version of data is always accessible. If you determine that replication is the preferred solution for your needs, NetApp SnapMirror software provides an easy-to-administer replication solution that uses available network bandwidth efficiently by transferring only changed blocks to remote locations.
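The efficiency claim for replication rests on transferring only changed blocks rather than whole volumes. The sketch below shows the general changed-block technique in Python, using block fingerprints to find what differs; it is an illustration of the idea, not SnapMirror's actual protocol, and the function names are invented.

```python
import hashlib


def block_fingerprints(volume, block_size=4):
    """Fingerprint each fixed-size block of a byte string."""
    return {i: hashlib.sha256(volume[i:i + block_size]).hexdigest()
            for i in range(0, len(volume), block_size)}


def incremental_sync(source, mirror, block_size=4):
    """Bring the mirror up to date by sending only blocks that differ."""
    src = block_fingerprints(source, block_size)
    dst = block_fingerprints(mirror, block_size)
    changed = [i for i in src if dst.get(i) != src[i]]
    out = bytearray(mirror.ljust(len(source), b"\0"))
    for i in changed:                       # only these cross the network
        out[i:i + block_size] = source[i:i + block_size]
    return bytes(out[:len(source)]), len(changed)


primary = b"AAAABBBBCCCCDDDD"
remote = b"AAAABBBBXXXXDDDD"    # one stale block at the remote site
synced, transferred = incremental_sync(primary, remote)
print(transferred)  # 1 block crossed the WAN, not the whole volume
```

For large, mostly static seismic volumes this is a dramatic saving: after the initial baseline transfer, each update costs bandwidth proportional to what actually changed.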

Use existing storage more efficiently

Traditional disk arrays force overprovisioning of storage, which results in higher costs. NetApp thin provisioning presents more storage space to the systems connecting to the storage system than is actually available. Furthermore, NetApp FlexVol technology helps optimise free space to adapt rapidly to the changing needs of exploration and operations teams.

Consider how this works in a seismic processing and interpretation environment. Typically, a certain amount of storage is allocated for use by each member of an asset team. At any given time, you will probably find that only a subset of the whole team is fully using the space assigned to them. For the others, that space remains largely unused. With thin provisioning, users share storage from a pool; they receive physical (or additional) storage space only when they actually need it. Suppose that 10 engineers are assigned 10TB of storage space each. With standard provisioning, that would total 100TB. With thin provisioning, those 10 engineers could accomplish the same work using half the storage or less without running out of space.

Protect local work

Significant risks exist when workspaces are unprotected. Engineers may replicate source projects and work on them locally for long periods. With NetApp Snapshot, users can create and save up to 255 instantaneous, point-in-time versions of each data volume. Allocating these workspaces on NetApp storage delivers performance equal to or better than local storage, and can provide frequent points of protection by using Snapshot. If the need arises, engineers can easily restore or roll back to an earlier version of a file.

Snapshot consumes additional storage space only when changes are made to a volume, so it is highly space efficient. End users can access Snapshot copies to recover files they have accidentally deleted or erroneously modified without involving IT or engineering support personnel.

With optional NetApp SnapRestore software, an entire volume can be reverted to any previously saved Snapshot copy, enabling restore of gigabytes or terabytes of data in a few minutes.
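The snapshot workflow described above, frequent point-in-time copies, self-service single-file recovery, and whole-volume rollback, can be sketched as follows. This is a conceptual model with invented names (`SnappedVolume`, `snap`, `revert`); a real system records block pointers rather than copying data, which is why creating a snapshot is instantaneous and nearly free.

```python
class SnappedVolume:
    """Sketch: point-in-time snapshots with file-level and volume-level restore."""

    MAX_SNAPSHOTS = 255   # the per-volume limit cited for NetApp Snapshot

    def __init__(self, files):
        self.files = dict(files)
        self.snapshots = {}

    def snap(self, name):
        if len(self.snapshots) >= self.MAX_SNAPSHOTS:
            raise RuntimeError("snapshot limit reached")
        # Stand-in for recording block pointers at a point in time.
        self.snapshots[name] = dict(self.files)

    def restore_file(self, name, path):
        # End-user single-file recovery from a snapshot copy.
        self.files[path] = self.snapshots[name][path]

    def revert(self, name):
        # SnapRestore-style rollback of the entire volume.
        self.files = dict(self.snapshots[name])


vol = SnappedVolume({"model.dat": "v1", "log.txt": "run-1"})
vol.snap("nightly")
vol.files["model.dat"] = "v2-corrupt"        # a bad edit during the day
vol.restore_file("nightly", "model.dat")     # recovered without IT support
print(vol.files["model.dat"])                # back to "v1"
```

Note how `restore_file` repairs one file while leaving the rest of the volume untouched, whereas `revert` discards all changes since the snapshot, matching the two recovery paths the text describes.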

Backup and recover valuable data

Data managers in upstream oil and gas operations can reduce system complexity by eliminating file servers and consolidating data on NetApp FAS storage systems, taking advantage of them for backup and recovery as well. Replicating the data repository to a remote facility may also be necessary to protect against site or regional disasters.