Oracle View

Reading the Mind of Oracle 8

Stephen Brobst & Robert Funck

Oracle8's object extensions get most of the attention, but its big set of features for parallel performance and VLDB management is just as eye-opening.

The long-awaited Oracle8 is finally here, and it's positioned to attack legacy-systems replacement through enhancements to Reliability, Availability, Serviceability, and Recoverability (RASR) and to performance characteristics for very large databases (VLDBs). At the same time, Oracle has positioned the product for next-generation applications by adding significant object/relational extensions. The framework for these technology innovations is linked to Oracle's big push to establish its Network Computing Architecture (NCA). In this piece we'll focus on the VLDB features of Oracle8.

The Motivation


Oracle has long been focused on scaling its RDBMS to meet the requirements of enterprise computing environments. The evolution from a mid-range, distributed-systems emphasis to full-scale, mission-critical systems implementations has required Oracle to behave a lot more like a mainframe database and less like a Unix database. Don't get us wrong--there is no doubt about Oracle's commitment to open systems; behaving like a mainframe database means delivering a lot more in the area of RASR for VLDB implementations. As Oracle databases continue to grow, the need to provide a robust architecture with manageability tools that support continuous availability requirements has become paramount.

VLDB is a constantly changing category. It used to be that a huge Oracle database was measured in hundreds of gigabytes. However, the rapidly declining cost of disk space, along with the increasing importance of information in competitive business environments, means hundreds of gigabytes are now the rule--rather than the exception--in large corporations.

It's not just the size of the data that is increasing; the number of users deployed against these VLDBs is likely to increase by orders of magnitude as well. Widespread access will be provided directly to clients in the consumer and business-to-business commerce areas.

The NCA is designed to provide a foundation for platform-independent computing in desktop environments. Oracle built its reputation on providing portable database solutions across virtually any server platform. The NCA--in which client independence and (transparent) portability are achieved--is the next logical step in this evolution.

Why is the NCA important from a VLDB perspective? The bottom line is that it allows Web-enabled applications to be easily and quickly deployed directly to customer sites; the number of online users in today's database designs (hundreds or thousands) will be dwarfed by the number of potential users in networked commerce environments (tens of thousands). Moreover, the thin-client model of the NCA will move application workload and complexity into centrally managed databases and transaction-processing (TP) monitors, further implying VLDB deployments.

Data Partitioning vs. Workload Partitioning


One of the most critical (and much awaited) aspects of Oracle8 is the implementation of table partitioning. High-end parallel implementations such as Informix, Sybase, Teradata, and DB2/6000 Parallel Edition use static data partitioning (usually hash partitioning on columns specified by the DBA), in which parallelism is exploited by assigning the contents of each partition to its own specific database instance for processing. Thus, in most cases, the granularity of parallelism is somewhat tied to the number of partitions defined. The idea is that the hashing function should provide a good "randomization" on the partitioning key such that data (and therefore workload) is divided evenly across the parallel database instances.

In Oracle8, the concepts of data partitioning and parallel workload assignment are two (mostly) separate issues. Data partitioning in Oracle8 involves a key-range splitting approach in which the DBA defines the number of partitions to be constructed and the range of key values to be assigned to each one. Like the hash partitioning in competitive databases, Oracle8 data partitioning is static with assignments explicit in the DDL for each table. It is possible, however, to use alter table commands to split or combine existing partitions.
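
As a sketch of what this looks like in DDL (the table, column, and partition names here are hypothetical, modeled on the fact table used later in this article), a range-partitioned table and a subsequent partition split might be declared as follows:

create table sales_tx
( purchase_dt  date
, store_id     number
, product_id   number
, purchase_amt number(9,2)
)
partition by range (purchase_dt)
( partition sales_9701 values less than (to_date('01-02-1997','dd-mm-yyyy'))
, partition sales_9702 values less than (to_date('01-03-1997','dd-mm-yyyy'))
, partition sales_9703 values less than (to_date('01-04-1997','dd-mm-yyyy'))
);

-- split an existing partition in two at a new boundary value
alter table sales_tx split partition sales_9703
  at (to_date('15-03-1997','dd-mm-yyyy'))
  into (partition sales_9703a, partition sales_9703b);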

The primary goals of Oracle8 table partitioning are increased manageability and availability, along with some performance benefits that will be discussed later in the article. From a manageability and availability perspective, table partitioning is critical for VLDBs because maintenance activities can be localized to a single partition, isolated from the other partitions, with execution times proportional to the size of that partition rather than of the table as a whole.

Maintenance activities--such as index rebuilds, defragmentation of tablespaces, and backups--can be performed on individual partitions one at a time or in parallel. For a large table, performing these operations on an individual partition basis makes scheduling and allocating resources much more manageable; these tasks can be performed incrementally so that a huge table maintenance operation doesn't have to be tackled all at once.

Another type of maintenance activity that is drastically simplified by key-range partitioning of large tables is the execution of date-dependent purge operations. These operations are very common in operational as well as data warehouse environments where older records are purged from the database while the more recent data is retained. Most systems keep a rolling table comprising n months of transaction detail (retail point-of-sale transactions, insurance claims, package shipments, financial purchase and redemption transactions, call detail records, and so on), dropping the oldest partition every month to make room for the upcoming month of transactions. Until now, the only major database that provided this capability was DB2 for MVS, using key-range partitioning based on date columns. Now Oracle8 provides this capability as well.
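
With the hypothetical sales_tx table sketched earlier, the monthly roll reduces to two localized DDL statements along these lines:

-- drop the oldest month; no mass delete, no per-row logging
alter table sales_tx drop partition sales_9701;

-- add an empty partition to receive the upcoming month
alter table sales_tx add partition sales_9704
  values less than (to_date('01-05-1997','dd-mm-yyyy'));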

A column-hashing scheme for table partitioning does not enable easy maintenance of a rolling history table of transaction detail because the transaction records will be spread randomly across the partitions. As a result, mass-delete operations applied against all partitions (usually with significant logging overhead) are required to eliminate older transactions from a hash-partitioned detail table, as opposed to simply dropping a partition in the key-range partitioning approach. Similarly, adding a new period requires transactional inserts that affect all partitions in a hash-partitioned table, unlike the localization to a single partition that occurs with a key-range partitioning scheme.

Proponents of the hash-partitioned approach are quick to point out that key-range partitioning almost always results in "hot spots" because users typically access the most recent data more frequently than older data. Thus, if key-range partitioning is used with date ranges (which is usually the case for detail tables), whichever partitions contain the most recent data will be accessed more frequently than partitions with older data. These hot spots can cause performance problems because of uneven processing among the database instances when one partition has all the data of interest to a query.

Oracle8 addresses this issue through dynamic workload distribution that is not constrained to boundaries established by the static partitioning of data in a table definition. Oracle8 uses a shared-disk implementation model, so every database instance can access any block in the database independent of table-partitioning boundaries; workload is easily distributed based on whichever instance has available bandwidth for processing requests. In an MPP environment with local-disk attachments, Oracle8 will use an affinity map to assign workload to local database instances whenever possible. However, in a large decision-support query with many participating instances, when one instance is finished with its locally assigned set of ROWIDs, it will become available to assist other instances that may be overwhelmed. Data skew problems in access frequency are thus addressed by dynamic distribution of workload to whichever Oracle instances have available bandwidth. (This architecture existed in the Oracle7.3 release.) The important point is that the implementation of dynamic workload partitioning across Oracle instances sits on top of the static data partitioning architecture, but is in no way tied to any constraints related to static assignment of data to table partitions.

Data partitions and the way in which workload is (dynamically) partitioned to database instances are not tightly coupled. Each Oracle instance will have many query servers working in parallel that may be assigned to work within the same partition or across different partitions, and may access either local or remote data blocks. In an SMP environment with a single Oracle instance (sans Oracle Parallel Server), the many query servers within the instance will similarly share workload across the many partitions in a large table and not be constrained by partition boundaries when processing a large DSS query.

Some operations in Oracle8, however, specifically abide by partition boundaries in their workload distribution. In particular, DML statements for mass insert, update, and delete operations are parallelized only across table partitions, not within one. For example, if a table is key-range partitioned by date using monthly partitions, and a statement is issued to delete all records with a specific range of dollar purchase amounts within the last month, the delete will be executed serially within the partition. Note that the amount of work performed will be proportional only to the size of the partition(s) that overlap the specified date range, because any partition with a date range outside of the dates specified in the where clause will be automatically excluded from query processing.
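
Expressed in SQL against the hypothetical sales_tx table, the example might look like the following; the date predicates confine all work to the partition covering March 1997:

-- runs serially within the qualifying partition; all others are excluded
delete from sales_tx
where purchase_dt >= to_date('01-03-1997','dd-mm-yyyy')
  and purchase_dt <  to_date('01-04-1997','dd-mm-yyyy')
  and purchase_amt between 100 and 500;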

Index scans are also parallelized only across table partitions. Prior to Oracle8, DML operations and index scans were not parallelized at all at the database level. Oracle8 now provides parallelization aligned with static data partitions, similar to what is implemented in competitive database products using a shared-nothing data model. Note that parallelized execution of insert, update, and delete operations requires local indexing (as opposed to global indexing) on the relevant table. Local indexing is implemented in current releases of Informix, Sybase, Teradata, and DB2/6000 Parallel Edition and is almost always the strategy of choice in decision-support environments. Global versus local indexing will be discussed in more detail later.

Distributed Lock Manager


The Distributed Lock Manager (DLM) component of Oracle is specific to Oracle Parallel Server implementations that involve multiple database instances executing on distinct nodes in clustered or MPP environments. The DLM coordinates multiple Oracle instances when providing access to data blocks in a shared-disk data model. The DLM ensures that two Oracle instances do not try to simultaneously update the same data block, and that any read request for a data block is serviced with the most up-to-date version of the data block (within the definition of Oracle's read consistency model). In a shared-nothing data model, in which I/O to a particular block is guaranteed to be issued only by a single, statically defined database instance, there is no opportunity for data integrity issues related to multiple concurrent reads and writes because only one database buffer cache can have possession of any particular block in the database.

To support dynamic workload distribution in which any database instance can work on any data block, the DLM must participate to ensure coherency in the I/O operations issued by each Oracle instance. In a decision-support environment, this requirement is rarely an issue because most workloads are heavily read-oriented and coherence of the data blocks is not relevant. In data warehouse implementations, we usually recommend making core tablespaces read-only to avoid lock-manager activity altogether. However, in OLTP environments, frequent insert, update, and delete activity is a way of life. In an Oracle Parallel Server environment, this situation often leads to the "block-ping" scenario in which Oracle instances compete for ownership of a frequently accessed block (especially index block headers). Block-pinging generates heavy lock-manager traffic as well as extra I/O because the desired block is repeatedly flushed from the database buffer cache to disk by one Oracle instance, only to be read from disk by another instance requesting ownership or read access for the block. Thus, what would normally be fast access to a frequently-used block in the database buffer cache of an Oracle instance becomes multiple disk I/Os costing thousands of times the latency of a memory access.
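
Marking a tablespace read-only is a single statement (the tablespace name here is hypothetical); once set, blocks in that tablespace no longer require inter-instance lock coordination:

-- freeze the tablespace against all writes
alter tablespace sales_ts read only;

-- revert when the next load window opens
alter tablespace sales_ts read write;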

Oracle8 may not make block-pinging go away, but it does take important steps to reduce its impact. First of all, the DLM is now integrated directly into the database engine; in previous implementations of Oracle, the DLM was a port-specific component that fell into the purview of solutions vendors implementing on their own platforms. A generic DLM has been widely distributed over the past few years to hardware vendors as a starting point for porting activities. The integration of the DLM directly into the engine is a logical extension of this activity.

The integrated DLM has many advantages. First, fewer context switches are required of the database in handling lock-manager requests, so overall efficiency increases. It also allows sophisticated algorithms such as shadow-lock lists to reduce the overhead of lock-manager requests. For most applications, these innovations reduce latency for these requests by 20 to 30 percent. The integrated DLM has been retrofitted to Oracle7.3 on the NT platform, so support for multi-node NT clusters with failover capability using Oracle Parallel Server will not have to wait for the full implementation of Microsoft's Wolfpack.

Another important advancement in Oracle8 related to the block-ping issue is the provision of reverse-key indices, in which the bytes of the index key are stored in reverse order so that the least significant byte comes first. The result is that index entries corresponding to consecutive key values will now land in separate data blocks. This enhancement is important; in OLTP applications implemented with Oracle Parallel Server 7.x, monotonically increasing key-value inserts into an index frequently create hot spots in which block-pinging occurs against index block headers. In Oracle8, this particular form of block-ping will disappear because separate blocks will be accessed by the multiple Oracle instances when inserting sequentially generated key values into an index on a large table.
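
Declaring a reverse-key index is a one-keyword change to the index DDL; a minimal sketch with hypothetical names:

-- byte-reversed keys scatter sequential order_id values across leaf blocks
create index order_id_rix on orders (order_id) reverse;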

Performance Enhancements


The optimizer in Oracle8 has gotten a lot smarter, especially in its use of indices. One area of particular focus is decision-support queries. Parallel query capability, with multiple query servers cooperating in the execution of a single query, was first introduced in Oracle7.1 in late 1993; around mid-1995, star-query execution using a Cartesian product on dimensional tables became a recognized optimizer plan in Oracle7.2. Then, in 1996, bitmap indices were introduced in Oracle7.3. Oracle8 brings all of these techniques together in an advanced star-query capability with higher performance and better flexibility than in Oracle7.

In Oracle7, star queries are implemented using a Cartesian product (a join using all combinations of all qualifying rows from the participating tables) of the dimensional tables with subsequent access to a concatenated index in the fact table. Although bitmap indices are supported in Oracle7.3, they are not used in star queries; parallelism is also unused in most executions of a star query in Oracle7. One of the problems with the Oracle7 approach is that the use of concatenated indices is expensive because all combinations of concatenated indices corresponding to subsets of dimensional tables that will be accessed must be created.

For example, suppose we have a large fact table of retail-purchase transactions with small dimensional tables that describe stores, products, and dates of purchase. In Oracle7, a concatenated index on the fact table would be created using the date, store_id, and product_id for selective access subsequent to taking a Cartesian product on the qualifying rows of the dimensional tables. However, to support access to the fact table when only a subset of the dimensions are of interest for a query (let's say we want to know the revenue from the shoe category in January, but we don't care about specific stores), we must build subsets of the columns into concatenated indices (date and product without store, for example). With a large number of dimensions, the number of combinations of concatenated indices gets out of control very quickly.

Oracle8 fixes this problem by building indices on individual columns; the optimizer uses each individual index that makes sense for query execution. Moreover, these indices are implemented as bitmap indices. The Oracle8 optimizer will rewrite a traditional decision-support query using equi-joins into SQL that uses subqueries to get at the filtering keys for the fact table from the dimensional tables (see Listing 1). Note that the query rewrite uses each bitmap dimension key index one at a time and, therefore, eliminates the need for concatenated indices. Furthermore, each of the subqueries can be executed in parallel for performance increases beyond the bitmap efficiencies. All servers assigned to the query execution will participate in each of the subquery operations in conjunction with the fact table. Parallelism will be exploited within and across partitions of a very large table; if the fact table is not partitioned, then parallelism will still be exploited within the table.

 
Listing 1. The Oracle8 star-query rewrite.

Before Oracle8 rewrite:
select sum(sales_tx.purchase_amt)
from sales_tx
    ,store
    ,period
    ,product
where sales_tx.store_id = store.store_id
  and store.city_nm = 'boston'
  and sales_tx.purchase_dt = period.period_dt
  and period.fiscal_qtr_cd in ('97Q1','97Q2')
  and sales_tx.product_id = product.product_id
  and product.category_nm = 'shoe'
;

After Oracle8 rewrite:
select sum(sales_tx.purchase_amt)
from sales_tx
where sales_tx.store_id in
        (select store.store_id from store
         where store.city_nm = 'boston')
  and sales_tx.purchase_dt in
        (select period.period_dt from period
         where period.fiscal_qtr_cd in ('97Q1','97Q2'))
  and sales_tx.product_id in
        (select product.product_id from product
         where product.category_nm = 'shoe')
;
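
The rewrite relies on single-column bitmap indices on the fact table's dimension keys, which might be created along these lines (the index names are hypothetical):

create bitmap index sales_tx_store_bix   on sales_tx (store_id);
create bitmap index sales_tx_date_bix    on sales_tx (purchase_dt);
create bitmap index sales_tx_product_bix on sales_tx (product_id);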

You can easily AND and OR the bitmaps for more complex query structures. If descriptive attributes are needed from the dimension tables, joins back to the dimension tables from the star-query result set will be used to obtain this information.

The storage savings from avoiding construction of all combinations of concatenated indices in favor of individual-column bitmap indices are significant. Because each individual column index is built only once, the redundancy of multiple appearances in the concatenated index combinations is eliminated. Bitmap indices will also often provide a more efficient storage representation than traditional B-tree implementations. A common rule of thumb has been that bitmap indices on columns with more than 50 or so unique values become problematic, because a separate bitmap is required for each unique value and each bitmap's size is one bit per row in the indexed table. Fifty bits of indexing overhead per row on a large table can become quite cumbersome in terms of space usage; however, Oracle8 uses compression techniques to significantly reduce this space overhead. In a recent comparison of bitmap indexing versus B-tree indexing on a table containing 40,000 distinct index-key values within a one-million-row table, the bitmap index was only 25 percent of the size of the B-tree index. Obviously, compression rates will vary with the specific demographics of the data, but it is clear that the impact is substantial.

The Oracle8 optimizer will also eliminate any processing activity against partitions in a large table that have a key-range specification outside of the qualifications of the where-clause filters in a query specification. This optimization has significant impact on queries that ask for the most recent data in a detail table comprising many partitions. Typically, such a query can be satisfied by processing against only one or two partitions from a table that may be constructed from hundreds of partitions. The bottom line result of partitioning for DSS queries is significant savings in processing time; partition pruning avoids the accessing of any parts of a large table that can be eliminated based on a comparison of partition key ranges with the SQL specification.
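
For example, a "most recent data" query against the month-partitioned sales_tx table sketched earlier might look like this; only the partitions whose key ranges fall on or after March 1997 participate, and all older partitions are pruned before execution:

select count(*), sum(purchase_amt)
from sales_tx
where purchase_dt >= to_date('01-03-1997','dd-mm-yyyy');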

Benchmarks of Oracle8 versus Oracle7--using the more advanced star-query capability combining bitmap indices, parallelism, partitioning, and a more sophisticated optimizer--demonstrate up to a 300-percent increase in measured performance results for Oracle8. We expect a barrage of updated TPC-D results from the hardware vendors as Oracle8 makes its way into this benchmark.

In Oracle8, the DBA also has the option of local or global indexing for each individual table. Local indexing means that each partition in a table is indexed individually, so a given index will only point to rows in the corresponding partition. Global indexing means that an index is built such that its pointers to data blocks span table partitions. For DSS applications, local-index construction will almost always be the preferred option because the columns most commonly used in query structures are rarely selective enough to eliminate a partition's participation--except when access is via a partitioning key; since we would need to look at all of the partitions anyway, we might as well get the performance and availability advantages of partitioning the indices along the table-partitioning boundaries. For OLTP, global indices may be more appropriate when the index column is not a partitioning key and the selectivity of the index is very high, because the application wants to avoid interrogating multiple local index structures to look for the desired rows. In OLTP applications where indexed access is not highly selective, or where a table is partitioned using a key that provides a primary access path, it may still be advantageous to use local indexing.
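
The choice is expressed per index in the DDL. A sketch with hypothetical names (customer_id stands in for a highly selective non-partitioning key):

-- local: one index tree per table partition, equipartitioned with sales_tx
create index sales_tx_prod_lix on sales_tx (product_id) local;

-- global: a single index spanning all table partitions, itself
-- range-partitioned on the index key (maxvalue closes the top range)
create index sales_tx_cust_gix on sales_tx (customer_id)
global partition by range (customer_id)
( partition cust_low  values less than (5000000)
, partition cust_high values less than (maxvalue)
);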

Size Counts


The Oracle7 release theoretically supports up to a 32-terabyte database size. This maximum is possible only under "ideal" circumstances in which tablespace and file allocations are configured in rather artificial ways. Practical limits in Oracle7 kick in somewhere around one terabyte, according to Ken Jacobs, Oracle's vice president of product strategy in server technologies. Oracle8, however, raises the theoretical limit of database size to 512 petabytes; the product is targeted for practical implementations up to hundreds of terabytes. The key innovation that removes the previous size limitation is a rework of the ROWID interpretation within the engine; the file number embedded in a ROWID is now tablespace-relative rather than global in its interpretation. This change allows a database to grow to as many as 64K tablespaces, with as many as 1,023 files per tablespace.

The immediate DBA reaction to this reinterpretation of ROWIDs will be a panicked inquiry about whether all Oracle7 indices will have to be rebuilt for a conversion to Oracle8; thankfully, the answer is an emphatic "No." Although ROWIDs are embedded in the Oracle index structures, a mechanism for backward compatibility in Oracle7's interpretation of ROWIDs will be supported fully to avoid the task of index reconstruction when migrating to Oracle8.

Other size-related enhancements in Oracle8 include an increase from 254 to 1,000 columns as the maximum in a single table, as well as support for up to 4,000 bytes in a single varchar2 column. Both of these limits are marketing-oriented; we would be seriously worried about any data model that results in a single table with 1,000 columns or a character string (as opposed to a full-text object) with thousands of bytes.

Multi-User Support


A big part of Oracle's NCA vision involves support for consumer self-service in the electronic commerce market, with Web-enabled applications using object "cartridges" for plug-and-play capability in a globally networked computing environment. This approach leads to a requirement for continual, efficient support of potentially tens of thousands of online users.

Oracle8 has taken a number of steps to increase the efficiency of its multi-user capabilities. More sophisticated serial-reuse algorithms for recycling local stored-procedure variables, automatic release of cursors in common cases of primary-key table access, enhancements to shared-memory techniques, and other efficiencies reduce per-user connection memory by as much as 30 to 60 percent relative to Oracle7.

In addition to reducing the per-user memory requirements for connections, Net8 (formerly known as SQL*Net) pools concurrent users to take advantage of idle time, thereby facilitating the sharing of database connections. In many ways, pooled connections are similar to the functionality provided by a TP monitor. Pooled connections, however, allow applications to be deployed in a two-tier client/server model and, unlike a TP monitor, maintain client-user identification and state for a continuous, conversational interaction with the database. The intent is for each Oracle instance to be capable of supporting 10,000+ concurrent users. Support for over 15,000 concurrent users was recently demonstrated on a Sun cluster running Oracle8.

Oracle8 also supports the ability to multiplex user connections through connection managers. While pooled connections are ideal for many concurrent users who are not continuously active, the multiplexing approach is more appropriate for client sessions that involve a heavy workload from the database server.

Advanced failover has also been added to Net8, whereby automatic reconnect is invoked upon a dropped connection. The failover mechanism is designed to preserve the context of any queries in progress upon the reconnect; the dropped connection becomes completely invisible from an application's perspective.

Clearly, much of the functionality usually required from a TP monitor has been embedded directly into the database engine itself. These enhancements are particularly important for the development of two-tier client/server applications in which TP monitors are not easily integrated.

Manageability Features


As total cost of ownership and availability service levels have become increasingly important in VLDB deployments, emphasis on management tools has escalated as well. Oracle8 places significant emphasis on graphical user interfaces and on automating database administration tasks through the Oracle Enterprise Manager. A lot more intelligence has been built into the database tools. For example, backup-and-recovery operations used to be managed primarily by operating-system utilities with minimal involvement from the Oracle database. Oracle8 has moved to a server-managed backup-and-recovery model in which the recovery catalog is managed within the database (with appropriate safeguards to ensure recoverability at all times), along with a recovery manager (RMAN) to automate the bulk of the procedures. In the old days, online backups initiated with operating-system utilities required Oracle to write out all changing blocks to the log files throughout the backup to ensure proper recoverability. With backup-and-recovery procedures integrated into the database server, this heavy overhead is avoided and the resulting log files are substantially smaller; Oracle8 provides automatic archiving of log files as well. Intelligent use of third-party media managers within the Oracle Enterprise Manager framework is designed to maximize the use of tape drives for backup and recovery in VLDB environments.

For the first time, Oracle8 also provides incremental and point-in-time backup-and-recovery capabilities. Incremental backup and recovery has long been lacking in the Oracle product suite, although third-party tools such as SQL*Backtrack have stepped in to fill this void. With Oracle8, incremental backups are integrated into the server through use of full-table scans to find and archive any blocks that have changed since the last backup. Although an individual full-table scan does not proceed in parallel, multiple incremental backups can be initiated simultaneously against distinct tables, and asynchronous I/O provides other forms of parallelism. The new point-in-time recovery capability in Oracle8 allows a tablespace to be recovered to any operator-specified point in time since a previously taken backup. The implementation ensures that integrity between table and index values is enforced whenever a point-in-time recovery is initiated. Obviously, referential-integrity checks between tables in separate tablespaces are not enforced across time. Point-in-time recovery is particularly useful in recovering a database to a previously known "good" state after a large batch job has gone awry.
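
As a rough sketch of how an incremental backup might be expressed through the recovery manager (the channel name, level, and tablespace are assumptions, and exact RMAN syntax varies by release):

run {
  allocate channel ch1 type disk;
  # level 1 copies only blocks changed since the last lower-level backup
  backup incremental level 1 (tablespace sales_ts);
}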

Performance management in an Oracle Parallel Server environment is also moving toward a more integrated database approach. In Oracle7, each instance of the database is essentially managed separately for performance-monitoring purposes. In Oracle8, the V$ performance views have been made global so that performance statistics across all instances can be examined from a single point of control. The underlying mechanism brings the local-instance V$ tables together using the same inter-instance communication mechanisms used by parallel query, yielding a robust and efficient implementation of the global view of the V$ performance information.
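
Concretely, the global views surface as GV$ counterparts to the familiar V$ views, carrying an extra instance-identifier column; for example:

-- one row per Oracle instance, gathered via the parallel-query mechanism
select inst_id, name, value
from gv$sysstat
where name = 'user commits';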

According to Greg Doherty, director of VLDB server development at Oracle, "The bigger databases come, the harder they fall--so don't fall." This philosophy is central to the Oracle8 manageability strategy for large databases, which is based on two principles.

First, all DBA operations should be performed online so that the database does not need to be brought down for maintenance activities. Online backups are much more feasible with Oracle8's integrated backup implementation than with Oracle7. Adding, splitting, renaming, and dropping partitions are online activities; even modification of an increasing number of the init.ora parameters is now an online operation. Some operations, particularly reorganization activities associated with indices, have yet to acquire online capabilities--but it is clear that this work is underway for future releases of the engine.

The second important principle is that maintenance operations should be bounded in their processing requirements. This means that operations such as backups, index builds, and table defragmentation should not require a continuous allocation of resources proportional to table size for large tables. A general rule of thumb for VLDBs is that at least half of the database size will be tied up in a single table. Partitioning allows these huge tables to be maintained using a divide-and-conquer approach in which operations can be spread over multiple maintenance windows (or, if resources permit, be executed in parallel), with the workload divided cleanly along partition boundaries.

Futures


Forthcoming releases will continue to focus on VLDB implementations for enterprise-systems solutions. One of the more immediate enhancements planned for Oracle8.1 is to improve large-scale OLTP performance by eliminating the block-ping problem in read operations. Today, if one instance has an updated block cached in its System Global Area (SGA) and another instance needs to read the block, the Distributed Lock Manager will force the owning instance to flush the block to disk so that the requesting instance can read the most up-to-date copy of the block.

This forced I/O activity is very expensive. In Oracle8.1, the plan is to implement a function-shipping model in which the requesting instance will issue its read request to the instance that has cached the updated block for servicing without forcing an I/O operation. The owning instance for the block will service the request directly out of memory by using its database buffer-cache mechanism. Although latency will still be associated with the DLM traffic and IPC in this scenario, the most costly aspect of the block-ping in the form of forced I/O activity is eliminated for situations in which read requests are issued against updated blocks cached in another instance's SGA. Although Oracle has not announced a ship date for the 8.1 release, expect this enhancement to be available sometime in 1998.

Now that Oracle8 is out the door, architects are busily putting together the plans for the Oracle9 product concept. Although the specifics are still fuzzy, it's likely that shared-nothing tablespaces will be supported so that a table can be partitioned, with each partition specifically associated with its own Oracle instance. Shared-nothing tablespaces would be a specification option for a table that would likely be chosen for very large tables in high-end OLTP applications to eliminate the block-ping situation when multiple instances compete for the same data blocks. Oracle also intends to support all appropriate combinations of local and global indexing with shared-everything and shared-nothing tablespaces. The company is clearly giving the database enough configuration capability to optimize the physical DBMS design for OLTP, DSS, or even hybrid implementations.

Stephen Brobst is a senior consultant for Tanning Technology Corp. in Denver, as well as a founder and managing partner of Strategic Technologies & Systems in Boston. You can email Stephen at [email protected].

Robert Funck is also a senior consultant at Tanning Technology Corp. Previously, he was the head of worldwide field support for the Oracle Parallel Systems division. You can reach him via email at [email protected].



This is a copy of an article published at http://www.oreview.com/.