This course focuses on those features of Oracle Database 11g that are applicable to database administration. Previous experience with Oracle databases (particularly Oracle Database 10g) is required for a full understanding of many of the new features. Hands-on practices emphasize functionality rather than testing your knowledge.
Overview
This course is designed to introduce you to the new features of Oracle Database 11g that are applicable to the work usually performed by database administrators and related personnel. The course does not attempt to provide every detail about a feature or cover aspects of a feature that were available in previous releases (except when defining the context for a new feature or comparing past behavior with current behavior). Consequently, the course is most useful to you if you have already administered other versions of Oracle databases, particularly Oracle Database 10g. Even with this background, you should not expect to be able to implement all of the features discussed in the course without supplemental reading, especially the Oracle Database 11g documentation.

The course consists of instructor-led lessons and demonstrations, plus many hands-on practices that allow you to see for yourself how certain new features behave. As with the course content in general, these practices are designed to introduce you to the fundamental aspects of a feature. They are not intended to test your knowledge of unfamiliar syntax or to provide an opportunity for you to examine every nuance of a new feature. The length of this course precludes such activity. Consequently, you are strongly encouraged to use the provided scripts to complete the practices rather than struggle with unfamiliar syntax.
Oracle Database 10g: New Features for Administrators I-2
Oracle Database Innovation 30 years of sustained innovation…
• Audit Vault
• Database Vault
• Grid Computing
• Self Managing Database
• XML Database
• Oracle Data Guard
• Real Application Clusters
• Flashback Query
• Virtual Private Database
• Built-in Java VM
• Partitioning Support
• Built-in Messaging
• Object Relational Support
• Multimedia Support
• Data Warehousing Optimizations
• Parallel Operations
• Distributed SQL & Transaction Support
• Cluster and MPP Support
• Multi-version Read Consistency
• Client/Server Support
• Platform Portability
• Commercial SQL Implementation
Oracle Database Innovation
As a result of an early focus on innovation, Oracle has maintained its lead in the industry with a huge number of trend-setting products. The continued focus on Oracle’s key development areas has led to a number of industry firsts, from the first commercial relational database, to the first portable tool set and UNIX-based client/server applications, to the first multimedia database architecture.
Customer Testimonials
“Oracle customers are highly satisfied with its Real Application Clusters and Automatic Storage Management when pursuing scale-out strategies.”
Mark Beyer, Gartner December 2006
“By consolidating with Oracle grid computing on Intel/Linux, we are witnessing about a 50% reduction in costs with increased performance.”
Tim Getsay, Assistant Vice Chancellor Management Information Systems Vanderbilt University
Customer Testimonials
Managing service level objectives is an ongoing challenge. Users expect fast, secure access to business applications 24x7, and Information Technology managers have to deliver without increasing costs and resources. The manageability features in Oracle Database 11g are designed to help organizations easily manage Infrastructure Grids and deliver on their users’ service level expectations. Oracle Database 11g introduces more self-management, automation, and advisors that help reduce management costs while increasing the performance, scalability, and security of business applications around the clock.
Enterprise Grid Computing
Oracle Database 10g was the first database designed for grid computing. Oracle Database 11g consolidates and extends Oracle’s unique ability to deliver the benefits of grid computing. Oracle Infrastructure Grids fundamentally changed the way data centers look and operate, transforming data centers from silos of isolated system resources into shared pools of servers and storage. Oracle’s unique grid architecture enables all types of applications to scale out server and storage capacity on demand. By clustering low-cost commodity server and storage modules on Infrastructure Grids, organizations are able to improve user service levels, reduce downtime, and make more efficient use of their IT resources.

Oracle Database 11g furthers the adoption of grid computing by offering:
• Unique scale-out technology with a single database image
• Lower server and storage costs
• Increased availability and scalability
Oracle Database 11g: Focus Areas
• Manageability
• Availability
• Performance
• Business Intelligence and Data Warehousing
• Security
Oracle Database 11g: Focus Areas
Oracle’s Infrastructure Grid technology enables Information Technology systems to be built out of pools of low-cost servers and storage that deliver the highest quality of service in terms of manageability, high availability, and performance. Oracle’s existing grid capabilities are extended in the areas listed on the slide, making your databases more manageable.

Manageability: New manageability features and enhancements increase DBA productivity, reduce costs, minimize errors, and maximize quality of service through change management, additional management automation, and fault diagnosis.

Availability: New high availability features further reduce the risk of downtime and data loss, including new disaster recovery offerings, important high availability enhancements to Automatic Storage Management, support for online database patching, improved online operations, and more.

Performance: Many innovative new performance capabilities are offered, including SecureFiles, compression for OLTP, Real Application Clusters optimizations, the Query Result Cache, TimesTen enhancements, and more.
Oracle Database 11g: Focus Areas
• Information Management
  – Content Management
  – XML
  – Oracle Text
  – Spatial
  – Multimedia and Medical Imaging
• Application Development
  – PL/SQL
  – .NET
  – PHP
  – SQL Developer
Oracle Database 11g: Focus Areas Oracle’s Infrastructure Grid provides the additional functionality needed to manage all information in the enterprise with robust security, information lifecycle management, and integrated business intelligence analytics to support fast and accurate business decisions at the lowest cost.
Management Automation Oracle Database 11g continues the effort begun in Oracle9i and carried on through Oracle Database 10g to dramatically simplify and ultimately fully automate the tasks that DBAs need to perform. New in Oracle Database 11g is Automatic SQL Tuning with self-learning capabilities. Other new capabilities include automatic, unified tuning of both SGA and PGA memory buffers and new advisors for partitioning, database repair, streams performance, and space management. Enhancements to the Oracle Automatic Database Diagnostic Monitor (ADDM) give it a better global view of performance in Oracle Real Application Clusters (RAC) environments and improved comparative performance analysis capabilities.
Self-managing Database: Oracle Database 10g
Self-management is an ongoing goal for the Oracle Database. Oracle Database 10g marked the beginning of a major effort to make the database easier to use. With Oracle Database 10g, the focus of self-management was mainly on performance and resources.
Self-managing Database: The Next Generation
• Manage Performance and Resources
• Manage Change
• Manage Fault
Self-managing Database: The Next Generation
Oracle Database 11g adds two more important axes to the overall self-management goal: change management and fault management.
Suggested Additional Courses
• Oracle Database 11g: Real Application Clusters
• Oracle Database 11g: Data Guard Administration
• Oracle Enterprise Manager 11g Grid Control
Suggested Additional Courses For more information about key grid computing technologies used by Oracle products, you can take additional courses (listed in the slide) from Oracle University.
Further Information
For more information about topics that are not covered in this course, refer to the following:
• Oracle Database 11g: New Features eStudies
  – http://www.oracle.com/education/library
  A comprehensive series of self-paced online courses covering all new features in great detail
• Oracle by Example series: Oracle Database 11g
  – http://otn.oracle.com/obe/obe11gdb/index.html
Suggested Schedule
The lessons in this guide are arranged in the order you will probably study them in class. The lessons are grouped into topic areas, but they are also organized by other criteria, including the following:
• A feature is introduced in an early lesson and then referenced in later lessons.
• Topics alternate between difficult and easy to facilitate learning.
• Lessons are supplemented with hands-on practices throughout the course to provide regular opportunities for students to explore what they are learning.
If your instructor teaches the class in the sequence in which the lessons are printed in this guide, then the class should run approximately as shown in the schedule. Your instructor may vary the order of the lessons, however, for a number of valid reasons. These include:
• Customizing material for a specific audience
• Covering a topic in a single day instead of splitting the material across two days
• Maximizing the use of course resources (such as hardware and software)
Oracle Database 11g: New Features for Administrators 1 - 1
Objectives
After completing this lesson, you should be able to: • Install Oracle Database 11g • Upgrade your database to Oracle Database 11g • Use online patching
Oracle Database 11g Installation Changes
• Minor modifications to the install flow. New screens for:
  – Turning off secure configuration in the seed database
  – Setting the out-of-box memory target
  – Specifying the database character set
  – Modifications to OS authentication to support SYSASM
• Addition of new products to the install:
  – SQL Developer
  – Movement of APEX from companion CD to main CD
  – Warehouse Builder (server-side pieces)
  – Oracle Configuration Management (OCM)
  – New Transparent Gateways
Oracle Database 11g Installation Changes
The following components were part of Oracle Database 10g release 2 (10.2) and are not available for installation with Oracle Database 11g:
• iSQL*Plus
• Oracle Workflow
• Oracle Data Mining Scoring Engine
• Oracle Enterprise Manager Java console
Oracle Database 11g Installation Changes
• Minor changes to the clusterware installation – Support for block devices for storage of OCR and Voting Disks – Ship “fix-up” scripts with the product
• Support for upgrade of XE databases directly to 11g • Better conformance to OFA in the installation – Prompt for ORACLE_BASE explicitly – Warnings in the alert log when ORACLE_BASE isn’t set
Oracle Database 11g Installation Changes
In Oracle Database 11g, Oracle Universal Installer prompts you to specify the Oracle base. The Oracle base you provide during the installation is logged in the local inventory. You can share this Oracle base across all of the Oracle homes you create on the system. Oracle recommends that you share one Oracle base for all of the Oracle homes created by a user. Each Oracle home has a corresponding Oracle base. Oracle Universal Installer has a list box where you can edit or select the Oracle base. The installer derives the default Oracle home from the Oracle base location you provide in the list box. However, you can change the default Oracle home by editing the location.

The following changes were made in Oracle Database 11g with respect to Oracle base to make it Optimal Flexible Architecture (OFA) compliant:
• ORACLE_BASE is a recommended environment variable. This variable will become mandatory in future releases.
• By default, Oracle base and Oracle Clusterware home are at the same directory level during the Oracle Clusterware installation. You should not create Oracle Clusterware home under Oracle base; specifying Oracle Clusterware home under Oracle base results in an error.
• Oracle recommends that you create the flash recovery area and data file location under Oracle base. In Oracle Database 10g, the default locations for the flash recovery area and data files are one level above the Oracle home directory. In Oracle Database 11g, however, Oracle base is the starting point for the default flash recovery area and data file locations. Oracle recommends that you keep the flash recovery area and data file location on separate disks.
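Under these conventions, the default locations are all derived from ORACLE_BASE. A minimal sketch of such a layout follows; the version number, home name, and SID used here are illustrative assumptions, not values mandated by the installer:

```shell
# Illustrative OFA-style layout derived from ORACLE_BASE.
# The product version (11.1.0), home name (db_1), and SID (orcl)
# are hypothetical examples.
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.1.0/db_1
DATAFILE_DIR=$ORACLE_BASE/oradata/orcl          # data files under Oracle base
RECOVERY_DIR=$ORACLE_BASE/flash_recovery_area   # flash recovery area under Oracle base
echo "$ORACLE_HOME"
echo "$DATAFILE_DIR"
```

In a real deployment, the data file and flash recovery directories would sit on separate disks, as recommended above.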
Oracle Database Upgrade Enhancements
• Pre-Upgrade Information Tool
• Simplified Upgrade
• Upgrade performance enhancement
• Post-Upgrade Status Tool
Oracle Database Upgrade Enhancements Oracle Database 11g release 1 (11.1) continues to make improvements to simplify manual upgrades, upgrades performed using Database Upgrade Assistant (DBUA), and downgrades. DBUA provides the following enhancements for single-instance databases: • Support for improvements to the pre-upgrade tool in the areas of space estimation, initialization parameters, statistics gathering, and new warnings. • The catupgrd.sql script performs all upgrades and the catdwgrd.sql script performs all downgrades, for both patch releases and major releases. • DBUA can automatically take into account multi-CPU systems to perform parallel object recompilation. • Errors are now collected as they are generated during the upgrade and displayed by the PostUpgrade Status Tool for each component.
Pre-Upgrade Information Tool
• SQL script, utlu111i.sql, analyzes the database to be upgraded • Checks for parameter settings that may cause upgrade to fail and generates warnings • Utility runs in “old server” & “old database” context • Provides guidance and warnings based on Oracle Database 11g Release 1 upgrade requirements • Supplies information to the DBUA to automatically perform any required actions
Pre-Upgrade Information Tool The pre-upgrade information tool analyzes the database to be upgraded. It is a SQL script that ships with Oracle Database 11g release 1 (11.1), and must be run in the environment of the database being upgraded. This tool displays warnings about possible upgrade issues with the database. It also displays information about required initialization parameters for Oracle Database 11g release 1 (11.1).
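In practice, the tool is run from SQL*Plus while connected to the database that will be upgraded. A minimal sketch follows; the spool file name is an arbitrary choice, and the script must first be copied from the new 11g home's rdbms/admin directory:

```sql
-- Run in the OLD database environment as a SYSDBA user.
-- utlu111i.sql ships with 11.1; copy it from the new home's
-- rdbms/admin directory before running it here.
SPOOL pre_upgrade.log
@utlu111i.sql
SPOOL OFF
-- Review pre_upgrade.log for warnings, parameter changes,
-- and tablespace size recommendations before upgrading.
```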
Pre-Upgrade Analysis
The Pre-Upgrade Information Tool checks for: • Database version and compatibility • Redo log size • Updated initialization parameters (e.g. shared_pool_size) • Deprecated and obsolete initialization parameters • Components in database (JAVAVM, Spatial, etc.) • Tablespace estimates – Increase in total size – Additional allocation for AUTOEXTEND ON – SYSAUX tablespace
Simplified Upgrade
• Upgrade driven from the contents of the component registry (DBA_REGISTRY view) • Single top-level script, catupgrd.sql, upgrades all components in the database using the information in the DBA_REGISTRY view • Supports re-run of catupgrd.sql, if necessary
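Because the upgrade is driven by the component registry, you can inspect what catupgrd.sql will act on by querying the DBA_REGISTRY data dictionary view directly:

```sql
-- Show each registered component with its version and status.
SELECT comp_id, comp_name, version, status
FROM   dba_registry
ORDER  BY comp_id;
```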
Startup Upgrade
STARTUP UPGRADE mode will suppress normal upgrade errors: • Previously, STARTUP MIGRATE in Oracle Database 9i R2 • Only real errors are spooled • Automatically handles setting system parameters that can otherwise cause problems during upgrade – Turns off job queues – Disables system triggers – Allows AS SYSDBA connections only
Startup Upgrade STARTUP UPGRADE enables you to open a database based on an earlier Oracle Database release. It also restricts logons to AS SYSDBA sessions, disables system triggers, and performs additional operations that prepare the environment for the upgrade (some of which are listed on the slide).
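Put together, a manual upgrade session built around STARTUP UPGRADE can be sketched as follows (run with the new 11g binaries; the spool file name is arbitrary, and ? is the SQL*Plus shorthand for the Oracle home):

```sql
-- Open the pre-11g database with the new binaries in upgrade mode.
STARTUP UPGRADE
SPOOL upgrade.log
@?/rdbms/admin/catupgrd.sql
SPOOL OFF
-- After the script completes, restart the database normally
-- and recompile invalid objects with utlrp.sql.
```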
Upgrade Performance Enhancement
Parallel recompilation of invalid PL/SQL database objects on multiprocessor systems:
• utlrp.sql can now exploit multiple CPUs to reduce the time required to recompile any stored PL/SQL and Java code.
Upgrade Performance Enhancement
The utlrp.sql script is a wrapper based on the UTL_RECOMP package. UTL_RECOMP provides a more general recompilation interface, including options to recompile objects in a single schema. See the documentation for the UTL_RECOMP package for more details. By default, utlrp.sql invokes the utlprp.sql script with 0 as the degree of parallelism for recompilation. This means that UTL_RECOMP automatically determines the appropriate level of parallelism based on the Oracle initialization parameters cpu_count and parallel_threads_per_cpu. If the parameter is 1, sequential recompilation is used.
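The same recompilation can also be driven directly through the UTL_RECOMP package; RECOMP_PARALLEL is one of its procedures, and the thread count of 4 below is an arbitrary example rather than a recommended value:

```sql
-- Recompile all invalid objects using four parallel recompilation jobs.
EXEC UTL_RECOMP.RECOMP_PARALLEL(4);

-- utlrp.sql ultimately drives this same interface, computing the
-- degree of parallelism from cpu_count and parallel_threads_per_cpu.
```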
Post-Upgrade Status Tool
Run utlu111s.sql to display the results of the upgrade • Error logging now provides more information per component • Reviews the status of each component and lists the elapsed time • Provides information about invalid/incorrect component upgrades • Run this tool after the upgrade completes to see errors and check the status of the components
Post-Upgrade Status Tool The Post-Upgrade Status Tool provides a summary of the upgrade at the end of the spool log. It displays the status of the database components in the upgraded database and the time required to complete each component upgrade. Any errors that occur during the upgrade are listed with each component and must be addressed. Run utlu111s.sql to display the results of the upgrade.
Rerun the Upgrade
Oracle Database 11.1 Upgrade Status Utility           03-18-2007
Component                          Status    Version
Oracle Server                      VALID     11.1.0.4.0
JServer JAVA Virtual Machine       VALID     11.1.0.4.0
Oracle Workspace Manager           VALID     11.1.0.4.0
Oracle Enterprise Manager          VALID     11.1.0.4.0
Oracle XDK                         VALID     11.1.0.4.0
Oracle Text                        VALID     11.1.0.4.0
Oracle XML Database                VALID     11.1.0.4.0
Oracle Database Java Packages      VALID     11.1.0.4.0
Oracle interMedia                  VALID     11.1.0.4.0
Spatial                            INVALID   11.1.0.4.0
  ORA-04031: unable to allocate 4096 bytes of shared memory
  ("shared pool","java/awt/Frame","joxlod exec hp",":SGAClass")
  ORA-06512: at "SYS.DBMS_JAVA", line 704
Rerun the Upgrade
The Post-Upgrade Status Tool should report VALID status for all components at the end of the upgrade. As shown on the slide, the report returns INVALID for the Spatial component because of the ORA-04031 error. In this case, you should fix the problem; running utlrp.sql may then change the status to VALID without rerunning the entire upgrade. Check the DBA_REGISTRY view after running utlrp.sql. If that does not fix the problem, or if you see UPGRADING status, the component upgrade did not complete. Resolve the problem and rerun catupgrd.sql after you issue SHUTDOWN IMMEDIATE followed by STARTUP UPGRADE.
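The recovery sequence described above can be sketched as a SQL*Plus session, assuming the root cause of the ORA-04031 error (for example, shared pool sizing) has already been corrected:

```sql
-- First try recompiling without redoing the whole upgrade.
@?/rdbms/admin/utlrp.sql
SELECT comp_id, status FROM dba_registry;

-- If components remain INVALID or UPGRADING, rerun the upgrade itself.
SHUTDOWN IMMEDIATE
STARTUP UPGRADE
@?/rdbms/admin/catupgrd.sql
```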
Prepare to Upgrade
1. Become familiar with the features of Oracle Database 11g Release 1
2. Determine the upgrade path
3. Choose an upgrade method
4. Choose an OFA-compliant Oracle home directory
5. Prepare a backup and recovery strategy
6. Develop a test plan to test your database, applications, and reports
Prepare to Upgrade Before you upgrade your database, you should perform the following steps: 1. Become familiar with the features of Oracle Database 11g release 1 (11.1). 2. Determine the upgrade path to the new release. 3. Choose an upgrade method. 4. Choose an Oracle home directory for the new release. 5. Prepare a backup and recovery strategy 6. Develop a testing plan.
Oracle Database 11g Release 1 Upgrade Paths
• Direct upgrade to 11g is supported from 9.2.0.4 or higher, 10.1.0.2 or higher, and 10.2.0.1 or higher.
• If you are not at one of these versions, you need to perform a “double-hop” upgrade.
• For example:
  – 7.3.4 -> 9.2.0.8 -> 11.1
  – 8.1.7.4 -> 9.2.0.8 -> 11.1
Oracle Database 11g Release 1 Upgrade Paths The path that you must take to upgrade to Oracle Database 11g release 1 (11.1) depends on the release number of your current database. It might not be possible to upgrade directly from your current version of Oracle Database to the latest version. Depending on your current release, you might be required to upgrade through one or more intermediate releases to upgrade to Oracle Database 11g release 1 (11.1). For example, if the current database is running release 8.1.6, then follow these steps: 1. Upgrade release 8.1.6 to release 8.1.7A using the instructions in Oracle8i Migration Release 3 (8.1.7). 2. Upgrade release 8.1.7A to 9.2.0.8 using the instructions in Oracle9i Database Migration Release 2 (9.2). 3. Upgrade release 9.2.0.8 to Oracle Database 11g release 1 (11.1) using the instructions in this lesson.
Choose an Upgrade Method
• Database Upgrade Assistant (DBUA) – Automated GUI tool that interactively steps the user through the upgrade process and configures the database to run with Oracle Database 11g Release 1
• Manual Upgrade – Use SQL*Plus to perform any necessary actions to prepare for the upgrade, run the upgrade scripts and analyze the upgrade results
• Export-Import – Use Data Pump or original Export/Import
Choose an Upgrade Method Oracle Database 11g release 1 (11.1) supports the following tools and methods for upgrading a database to the new release: • Database Upgrade Assistant (DBUA) provides a graphical user interface (GUI) that guides you through the upgrade of a database. DBUA can be launched during installation with the Oracle Universal Installer, or you can launch DBUA as a standalone tool at any time in the future. DBUA is the recommended method for performing a major release upgrade or patch release upgrade. • Manual upgrade using SQL scripts and utilities provide a command-line upgrade of a database, using SQL scripts and utilities. • Export and Import utilities use the Oracle Data Pump Export and Import utilities, available as of Oracle Database 10g release 1 (10.1), or the original Export and Import utilities to perform a full or partial export from your database, followed by a full or partial import into a new Oracle Database 11g release 1 (11.1) database. Export/Import can copy a subset of the data, leaving the database unchanged. • CREATE TABLE AS SQL statement copies data from a database into a new Oracle Database 11g release 1 (11.1) database. Data copying can copy a subset of the data, leaving the database unchanged.
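For the Export-Import method, a full Data Pump export followed by an import into the new database can be sketched as follows. The connect strings, directory object, and file names here are illustrative placeholders, not defaults:

```shell
# Hypothetical example: full export from the source database ...
# (Data Pump requires a 10.1 or later source; use exp/imp for older releases.)
expdp system@orcl_old FULL=y DIRECTORY=dp_dir DUMPFILE=full.dmp LOGFILE=exp.log

# ... followed by a full import into the new 11.1 database.
impdp system@orcl_new FULL=y DIRECTORY=dp_dir DUMPFILE=full.dmp LOGFILE=imp.log
```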
• Advantages
  – Automates all tasks
  – Performs both release and patch set upgrades
  – Supports RAC, single instance, and ASM
  – Informs user of, and fixes, upgrade prerequisites
  – Automatically reports errors found in spool logs
  – Provides complete HTML report of the upgrade process
  – Command-line interface allows ISVs to automate
• Disadvantages – Offers less control over individual upgrade steps
Sample Test Plan
• Make a clone of your production system using Enterprise Manager • Upgrade test database to latest version • Update COMPATIBLE to latest version • Run your applications, reports, and legacy systems • Ensure adequate performance by comparing metrics gathered before and after upgrade • Tune queries or problem SQL statements • Update any necessary database parameters
Performing a Manual Upgrade - 1
1. Install Oracle Database 11g Release 1 in a new ORACLE_HOME
2. Analyze the existing database:
   – Use rdbms/admin/utlu111i.sql with the existing server:
     SQL> SPOOL pre_upgrade.log
     SQL> @utlu111i
3. Adjust redo log and tablespace sizes if necessary
4. Copy existing initialization files to the new ORACLE_HOME and make adjustments as recommended
5. SHUTDOWN IMMEDIATE, back up, then switch to the new ORACLE_HOME
Note: catuppst.sql is the post-upgrade script that performs remaining upgrade actions that do not require that the database be open in UPGRADE mode. It can be run at the same time utlrp.sql is being run.
Now you are ready to use Oracle Database 11g Release 1! • Perform any required post-upgrade steps • Make additional post-upgrade adjustments to initialization parameters • Test your applications and tune performance • Finally, set initialization parameter COMPATIBLE to 11.1 to make full use of Oracle Database 11g Release 1 features • 10.0.0 is the minimum compatibility required for 11.1
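The final compatibility step can be sketched in SQL*Plus as follows. COMPATIBLE is a static parameter, so the change is made in the spfile and takes effect only after a restart:

```sql
-- Raise compatibility only after testing: this enables 11.1-only
-- features and, once raised, COMPATIBLE cannot be lowered again.
ALTER SYSTEM SET COMPATIBLE = '11.1.0' SCOPE = SPFILE;
SHUTDOWN IMMEDIATE
STARTUP
```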
Downgrading a Database - 1
1. Major release downgrades are supported back to 10.2 and 10.1.
2. You can only downgrade back to the release from which you upgraded.
3. Shut down and start up the instance in DOWNGRADE mode:
   SQL> STARTUP DOWNGRADE
4. Run the downgrade script, which automatically determines the version of the database and calls the specific component scripts:
   SQL> SPOOL downgrade.log
   SQL> @catdwgrd.sql
5. Shut down the database immediately after the downgrade script ends.
Database Upgrade Assistant (DBUA)
• DBUA is a GUI and command-line tool for performing database upgrades
• Uses a wizard interface
  – Automates the upgrade process
  – Simplifies detecting and handling upgrade issues
• Supported releases for 11g
  – 9.2, 10.1, and 10.2
• Patch set upgrades
  – Supported from 10.2.0.3 onwards
• Supported database types
  – Single instance
  – Real Application Clusters
  – Automatic Storage Management
Key DBUA Features - 1
• Upgrade Scripts – Runs all necessary scripts to perform the upgrade
• Progress – Displays upgrade progress at a component level
• Configuration Checks – Automatically makes appropriate adjustments to initialization parameters – Checks for adequate resources such as SYSTEM tablespace size, rollback segments size, redo log size – Checks disk space for auto extended datafiles – Creates mandatory SYSAUX tablespace – Space Usage summary in SpaceUsage.txt
Key DBUA Features - 2
• Recoverability – Performs a backup of the database before upgrade – If needed can restore the database after upgrade
• Pre-Upgrade Summary – Prior to upgrade provides summary of all actions to be taken – Wizard warns user about any issues found – Provides space analysis information for backup – Applies required changes to network configuration files
Key DBUA Features - 3 • Configuration files – Creates init.ora and spfile in new ORACLE_HOME – Updates network configurations – Uses OFA compliant locations – Updates database information on Oracle Internet Directory
• Oracle Enterprise Manager
  – Allows you to set up and configure EM Database Control
  – Allows you to register the database with EM Grid Control
  – If EM is in use, upgrades the EM repository and makes necessary configuration changes
• Logging and tracing
  – Writes detailed trace and logging files (ORACLE_BASE/cfgtoollogs/dbua/<sid>/upgradeNN)
Key DBUA Features - 4
• Real Application Clusters – All nodes are upgraded – All configuration files are upgraded
• Minimizing Downtime – Speeds up upgrade by disabling archiving – Recompiles packages in parallel – User interaction is not required after upgrade starts
• Security features – Locks new users in the upgraded database
Command Line Syntax
When invoked with the -silent command line option, DBUA operates in silent mode. In silent mode, DBUA does not present a user interface. It also writes any messages (including information, errors, and warnings) to a log file in ORACLE_HOME/cfgtoollogs/dbua/SID/upgraden, where n is the number of upgrades that DBUA has performed as of this upgrade. For example, the following command upgrades a database named ORCL in silent mode:
dbua -silent -dbName ORCL &
Here is a list of important options you can use:
• -backupLocation directory: Specifies a directory to back up your database before the upgrade starts
• -postUpgradeScripts script [, script ] ...: Specifies a comma-delimited list of SQL scripts. Specify complete path names. The scripts are executed at the end of the upgrade.
• -initParam parameter=value [, parameter=value ] ...: Specifies a comma-delimited list of initialization parameter values of the form name=value
• -emConfiguration {CENTRAL|LOCAL|ALL|NOBACKUP|NOEMAIL|NONE}: Specifies Oracle Enterprise Manager management options
Note: For more information about these options, refer to the Oracle Database Upgrade Guide.
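Combining several of these options, a scripted silent upgrade might look like the following sketch; the database name, backup location, and script path are illustrative placeholders:

```shell
# Hypothetical silent DBUA invocation: upgrade ORCL, back it up first,
# and run a custom SQL script when the upgrade finishes.
dbua -silent \
     -dbName ORCL \
     -backupLocation /u01/backup/orcl \
     -postUpgradeScripts /home/oracle/scripts/post_upgrade.sql
```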
Using DBUA to Upgrade Your Database
Complete the following steps to upgrade a database using the DBUA graphical user interface:
On Linux or UNIX platforms, enter the dbua command at a system prompt in the Oracle Database 11g release 1 (11.1) environment. The DBUA Welcome screen appears. Click Next.
If an Automatic Storage Management (ASM) instance is detected on the system, the Upgrade Operations page appears with options to upgrade a database or an ASM instance. If no ASM instance is detected, the Databases screen appears.
On the Upgrade Operations page, select Upgrade a Database. This operation upgrades a database to Oracle Database 11g release 1 (11.1). Oracle recommends that you upgrade the database and ASM in separate DBUA sessions, in separate Oracle homes. Click Next.
Choose Database to Upgrade and Diagnostic Destination
Choose Database to Upgrade and Diagnostic Destination
The Databases screen appears. Select the database you want to upgrade from the Available Databases table. You can select only one database at a time. If you do not see the database that you want, make sure an entry with the database name exists in the oratab file in the etc directory. If you are running DBUA from a user account that does not have SYSDBA privileges, you must enter the user name and password credentials to enable SYSDBA privileges for the selected database. Click Next.
DBUA analyzes the database, performing the following pre-upgrade checks and displaying warnings as necessary:
• Redo log files whose size is less than 4 MB. If such files are found, DBUA gives the option to drop them and create new redo log files.
• Obsolete or deprecated initialization parameters.
When DBUA finishes its checks, the Diagnostic Destination screen appears. Do one of the following:
• Accept the default location for your diagnostic destination.
• Enter the full path to a different diagnostic destination in the Diagnostic Destination field. Click Browse to select a diagnostic destination.
Click Next.
Moving Database Files If you are upgrading a single-instance database, then the Move Database Files screen appears. If you are upgrading an Oracle Real Application Clusters database, then the Move Database Files screen does not appear. Select one of the following options: • Do Not Move Database Files as Part of Upgrade • Move Database Files during Upgrade If you choose to move database files, then you must also select one of the following: • File System: Your database files are stored on the host file system. • Automatic Storage Management (ASM): Your database files are stored on ASM storage, which must already exist on your system. If you do not have an ASM instance, you can create one using DBCA and then restart DBUA. Click Next.
Database File Locations The Database File Locations screen appears. Select one of the following options: • Use Common Location for All Database Files. If you choose to have all of your database files in one location, then you must also do one of the following: - Accept the default location for your database files - Enter the full path to a different location in the Database Files Location field - Click Browse and select a different location for your database files • Use Oracle-Managed Files. If you choose to use Oracle-Managed Files for your database files, then you must also do one of the following: - Accept the default database area - Enter the full path to a different database area in the Database Area field - Click Browse and select a different database area • Use a Mapping File to Specify Location of Database Files. This option enables you to specify different locations for your database files. A sample mapping file is available in the logging location. You can edit the property values of the mapping file to specify a different location for each database file. Click Next.
Recovery Configuration The Recovery Configuration screen allows you to designate a Flash Recovery Area for your database. If you selected Move Database Files During Upgrade, or if an Oracle Database Express Edition database is being upgraded to Oracle Database Enterprise Edition, then a Flash Recovery Area must be configured. If a Flash Recovery Area is already configured, then the current settings are retained, but the screen still appears so that you can override these values. Click Next.
Management Options and Database Credentials If no other database is already being monitored with Enterprise Manager, then the Management Options screen appears. At the Management Options screen, you have the option of setting up your database so it can be managed with Enterprise Manager. Before you can register the database with Oracle Enterprise Manager Grid Control, an Oracle Enterprise Manager Agent must be configured on the host computer. To set up your database to be managed with Enterprise Manager, select Configure the Database with Enterprise Manager and then select one of the proposed options. Click Next. The Database Credentials screen appears. Choose one of the proposed options and click Next.
Network Configuration If DBUA detects that multiple listeners are configured, then the Network Configuration for the Database screen appears. The Network Configuration screen has two tabs. The Listeners tab is displayed if you have more than one listener. The Directory Service tab is displayed if you have directory services configured. On the Listeners tab, select one of the following options: • Register this database with all the listeners • Register this database with selected listeners only If you choose to register selected listeners only, then you must select the listeners you want in the Available Listeners list and use the arrow buttons to move them to the Selected Listeners list. If you want to register your database with a directory service, then click the Directory Service tab. On the Directory Service tab, select one of the following options: • Yes, register the database: Selecting this option enables client computers to connect to this database without a local name file (tnsnames.ora) and also enables them to use the Oracle Enterprise User Security feature. • No, don't register the database If you choose to register the database, then you must also provide a user distinguished name (DN) in the User DN field and a password for that user in the Password field. An Oracle wallet is created as part of database registration. It contains credentials suitable for password authentication between this database and the directory service. Enter a password in the Wallet Password and Confirm Password fields. Click Next.
Recompile Invalid Objects The Recompile Invalid Objects screen appears. Select Recompile invalid objects at the end of upgrade if you want DBUA to recompile all invalid PL/SQL modules after the upgrade is complete. This ensures that you do not experience performance issues later, as you begin using your newly upgraded database. If multiple CPUs are available, then you can reduce the time it takes to perform this task by taking advantage of parallel processing: DBUA automatically adds a section to the Recompile Invalid Objects screen, determines the number of CPUs you have available, and provides a recommended degree of parallelism, which determines how many parallel processes are used to recompile your invalid PL/SQL modules. Specifically, DBUA sets the degree of parallelism to one less than the number of CPUs you have available. You can adjust this default value by selecting a new value from the Degree of Parallelism menu. Select Turn off Archiving and Flashback logging for the duration of upgrade to reduce the time required to complete the upgrade. If the database is in ARCHIVELOG or flashback logging mode, then DBUA gives you the choice of turning them off for the duration of the upgrade. If you choose this option, Oracle recommends that you perform an offline backup immediately after the upgrade. Click Next.
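If you skip DBUA's recompile step, or perform a manual upgrade, the same work can be done afterward with the utlrp.sql script. The sketch below assumes a SQL*Plus session connected AS SYSDBA:

```sql
-- Recompile all invalid PL/SQL modules after the upgrade (run as SYSDBA).
-- utlrp.sql chooses the degree of parallelism automatically.
SQL> @?/rdbms/admin/utlrp.sql

-- Verify that no invalid objects remain:
SQL> SELECT COUNT(*) FROM dba_objects WHERE status = 'INVALID';
```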
Database Backup and Space Checks The Backup screen appears. Select Backup database if you want DBUA to back up your database for you. Oracle strongly recommends that you back up your database before starting the upgrade. If errors occur during the upgrade, you might be required to restore the database from the backup. If you use DBUA to back up your database, then it makes a copy of all your database files in the directory you specify in the Backup Directory field. DBUA performs this cold backup automatically after it shuts down the database and before it begins performing the upgrade procedure. The cold backup does not compress your database files, and the backup directory must be a valid file system path. You cannot specify a raw device for the cold backup files. In addition, DBUA creates a batch file in the specified directory. You can use this batch file to restore the database files: • On Windows operating systems, the file is called db_name_restore.bat. • On Linux or UNIX platforms, the file is called db_name_restore.sh. If you choose not to use DBUA for your backup, then Oracle assumes you have already backed up your database using your own backup procedures. Click Next. Note: If you decide to use DBUA to back up your database, DBUA checks that you have enough space before the backup is taken.
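If you choose to take your own backup instead of letting DBUA do it, a minimal RMAN alternative might look like the following sketch (the format string and tag are example values, not recommendations):

```sql
RMAN> CONNECT TARGET /
RMAN> BACKUP DATABASE FORMAT '/backup/%U' TAG 'pre_upgrade';
```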
Database Upgrade Summary The Summary screen appears. The Summary screen shows the following information about the upgrade before it starts: • Name, version, and Oracle home of the old and new databases • Database backup location, available space, and space required • Warnings ignored • Database components to be upgraded • Initialization parameter changes • Database files location • Listener registration Check all of the specifications. Then do one of the following: • If anything is incorrect, click Back until you reach the screen where you can correct it. • Click Finish if everything is correct.
Upgrade Progress and Results The Progress screen appears, and DBUA begins the upgrade. If an error is severe and cannot be handled during the upgrade, DBUA displays an error message with Ignore and Abort choices; other errors must be addressed accordingly. You have the following choices: • Click Ignore to ignore the error and proceed with the upgrade. You can fix the problem, restart DBUA, and complete the skipped steps. • Click Abort to terminate the upgrade process. If a database backup was taken by DBUA, then it asks if you want to restore the database. After the database has been restored, you must correct the cause of the error and restart DBUA to perform the upgrade again. If you do not want to restore the database, then DBUA leaves the database in its present state so that you can proceed with a manual upgrade. After the upgrade has completed, the following message is displayed on the Progress screen: Upgrade is complete. Click "OK" to see the results of the upgrade. Click OK. The Upgrade Results screen appears. The Upgrade Results screen displays a description of the original and upgraded databases and changes made to the initialization parameters. The screen also shows the directory where various log files are stored after the upgrade. You can examine these log files to obtain more details about the upgrade process. Click Restore Database if you are not satisfied with the upgrade results.
Best Practices - 1
• The three T’s: TEST, TEST, TEST – Test the upgrade – Test the application(s) – Test the recovery strategy
• Functional Testing – Clone your production database on a machine with similar resources – Use DBUA for your upgrade – Run your application and tools to ensure they work
Best Practices – 1 Perform the planned tests on the current database and on the test database that you upgraded to Oracle Database 11g release 1 (11.1). Compare the results, noting anomalies. Repeat the test upgrade as many times as necessary. Test the newly upgraded test database with existing applications to verify that they operate properly with a new Oracle database. You might also test enhanced functions by adding available Oracle Database features. However, first make sure that the applications operate in the same manner as they did in the current database. Functional testing is a set of tests in which new and existing features and functions of the system are tested after the upgrade. Functional testing includes all database, networking, and application components. The objective of functional testing is to verify that each component of the system functions as it did before upgrading and to verify that new functions are working properly. Create a test environment that does not interfere with the current production database. Practice upgrading the database using the test environment. The best upgrade test, if possible, is performed on an exact copy of the database to be upgraded, rather than on a downsized copy or test data. Do not upgrade the actual production database until after you successfully upgrade a test subset of this database and test it with applications, as described in the next step. The ultimate success of your upgrade depends heavily on the design and execution of an appropriate backup strategy.
Best Practices - 2
• Performance Testing
– Gather AWR or Statspack baselines during various workloads
– Gather sample performance metrics after upgrade
– Compare metrics before and after upgrade to catch issues
– Upgrade production systems only after performance and functional goals have been met
• Pre-Upgrade Analysis
– Run utlu111i.sql, or run DBUA without clicking Finish, to get a pre-upgrade analysis
– Read general and platform-specific release notes to catch special cases
Best Practices – 2 Performance testing of the new Oracle database compares the performance of various SQL statements in the new Oracle database with the statements' performance in the current database. Before upgrading, you should understand the performance profile of the application under the current database. Specifically, you should understand the calls the application makes to the database server. For example, if you are using Oracle Real Application Clusters, and you want to measure the performance gains realized from using cache fusion when you upgrade to Oracle Database 11g release 1 (11.1), then make sure you record your system's statistics before upgrading. For that, you can use various V$ views or AWR/Statspack reports.
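As a hedged illustration of recording pre-upgrade statistics, an AWR baseline can be preserved with the DBMS_WORKLOAD_REPOSITORY package; the snapshot IDs and baseline name below are placeholders, not values from this course:

```sql
-- Take a fresh snapshot, then fence off a representative workload period
-- as a named baseline (snapshot IDs 100 and 110 are example values).
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE(start_snap_id => 100, end_snap_id => 110, baseline_name => 'pre_upgrade_peak');
```

After the upgrade, compare new AWR reports against this baseline to catch regressions.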
Best Practices - 3
• Automate your upgrade – Use DBUA in command line mode for automating your upgrade – Useful for upgrading a large number of databases
• Logging – For manual upgrade, spool upgrade results and check logs for possible issues – DBUA can also do this for you
• Automatic conversion from 32-bit to 64-bit database software
• Check for sufficient space in SYSTEM, UNDO, TEMP, and redo log files
Best Practices - 3 If you are installing 64-bit Oracle Database 11g release 1 (11.1) software but were previously using a 32-bit Oracle Database installation, then the database is automatically converted to 64-bit during a patch release or major release upgrade to Oracle Database 11g release 1 (11.1). You must increase initialization parameters affecting the system global area, such as sga_target and shared_pool_size, to support 64-bit operation.
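As a sketch of that last step (the sizes are placeholders, not recommendations), the memory parameters can be raised before restarting the upgraded 64-bit instance:

```sql
-- Example only: raise SGA-related parameters for 64-bit operation.
ALTER SYSTEM SET sga_target = 2G SCOPE = SPFILE;
ALTER SYSTEM SET shared_pool_size = 512M SCOPE = SPFILE;
-- SPFILE-scoped changes take effect at the next instance startup.
```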
Best Practices - 4
• Use Optimal Flexible Architecture (OFA)
– Offers best practices for locating your database files, configuration files, and ORACLE_HOME
• Use new features
– Migrate to the CBO from the RBO
– Automatic management features for SGA, Undo, PGA, and so on
– Use AWR/ADDM to diagnose performance issues
– Consider using the SQL Tuning Advisor
– Change the COMPATIBLE and OPTIMIZER_FEATURES_ENABLE parameters to enable new optimizer features
Best Practices – 4 Oracle recommends the Optimal Flexible Architecture (OFA) standard for your Oracle Database installations. The OFA standard is a set of configuration guidelines for efficient and reliable Oracle databases that require little maintenance. OFA provides the following benefits: • Organizes large amounts of complicated software and data on disk to avoid device bottlenecks and poor performance • Facilitates routine administrative tasks, such as software and data backup functions, which are often vulnerable to data corruption • Alleviates switching among multiple Oracle databases • Adequately manages and administers database growth • Helps to eliminate fragmentation of free space in the data dictionary, isolates other fragmentation, and minimizes resource contention. If you are not currently using the OFA standard, then switching to the OFA standard involves modifying your directory structure and relocating your database files.
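A typical OFA-style directory layout looks like the following sketch; the mount points and SID are illustrative and will differ in your environment:

```
/u01/app/oracle                        -- ORACLE_BASE
/u01/app/oracle/product/11.1.0/db_1    -- ORACLE_HOME
/u01/app/oracle/diag                   -- diagnostic destination
/u02/oradata/orcl                      -- datafiles
/u03/flash_recovery_area               -- Flash Recovery Area
```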
Best Practices - 5
• Use Enterprise Manager Grid Control to manage your enterprise – Use EM to setup new features and try them out – EM provides complete manageability solution for Databases, Applications, Storage, Security, Networks
• Collect Object and System Statistics to improve plans generated by the CBO • Check for invalid objects in the database before upgrading – SQL> select owner, object_name, object_type, status from dba_objects where status='INVALID';
Best Practices – 5 When upgrading to Oracle Database 11g release 1 (11.1), optimizer statistics are collected for dictionary tables that lack statistics. This statistics collection can be time consuming for databases with a large number of dictionary tables, but statistics gathering only occurs for those tables that lack statistics or are significantly changed during the upgrade. To decrease the amount of downtime incurred when collecting statistics, you can collect statistics prior to performing the actual database upgrade. As of Oracle Database 10g release 1 (10.1), Oracle recommends that you use the DBMS_STATS.GATHER_DICTIONARY_STATS procedure to gather dictionary statistics, in addition to gathering statistics on component schemas (SYS, SYSMAN, XDB, and so on) using the DBMS_STATS.GATHER_SCHEMA_STATS procedure.
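For example, in a SQL*Plus session connected as a privileged user, the pre-upgrade statistics gathering described above can be run as follows (SYSMAN is just one example of a component schema):

```sql
-- Gather dictionary statistics before the upgrade to shorten downtime:
EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
-- Component schemas can also be covered individually, for example:
EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SYSMAN');
```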
Best Practices - 6
• Avoid upgrading in a crisis – Keep up with security alerts – Keep up with critical patches needed for your applications – Keep track of de-support schedules
• Always upgrade to latest supported version of the RDBMS • Make sure patchset is available for all your platforms • Data Vault Option needs to be turned off for upgrade
Best Practices - 6 If you have enabled Oracle Database Vault, then you must disable it before upgrading the database, and enable it again when the upgrade is finished.
Deprecated Features in 11g Release 1
• Oracle Ultra Search • Java Development Kit (JDK) 1.4 • CTXXPATH index
Deprecated Features in 11g Release 1 The slide lists Oracle Database features deprecated in Oracle Database 11g release 1 (11.1). They are supported in this release for backward compatibility, but Oracle recommends that you migrate away from these deprecated features: • Oracle Ultra Search • Java Development Kit (JDK) 1.4: Oracle recommends that you use JDK 5.0 (also known as JDK 1.5) instead. • CTXXPATH index: Oracle recommends that you use XMLIndex instead.
Important Initialization Parameter Changes
• USER_DUMP_DEST
• DIAGNOSTIC_DEST
• BACKGROUND_DUMP_DEST
• CORE_DUMP_DEST
• UNDO_MANAGEMENT not set implies AUTO mode
To migrate to automatic undo management:
1. Set UNDO_MANAGEMENT=MANUAL
2. Execute your workload
3. Execute the DBMS_UNDO_ADV.RBU_MIGRATION function
4. Create an undo tablespace based on the size result
5. Set UNDO_MANAGEMENT=AUTO
Important Initialization Parameter Changes The DIAGNOSTIC_DEST initialization parameter replaces the USER_DUMP_DEST, BACKGROUND_DUMP_DEST, and CORE_DUMP_DEST parameters. Starting with Oracle Database 11g, the default location for all trace information is defined by DIAGNOSTIC_DEST which defaults to $ORACLE_BASE/diag. For more information about diagnostics, refer to the Diagnostics lesson in this course. A newly installed Oracle Database 11g instance defaults to automatic undo management mode, and if the database is created with Database Configuration Assistant, an undo tablespace is automatically created. A null value for the UNDO_MANAGEMENT initialization parameter now defaults to automatic undo management. It used to default to manual undo management mode in earlier releases. You must therefore use caution when upgrading a previous release to Oracle Database 11g.
Important Initialization Parameter Changes (Continued) To migrate to automatic undo management, perform the following steps: 1. Set UNDO_MANAGEMENT=MANUAL. 2. Start the instance again and run through a standard business cycle to obtain a representative workload. 3. After the standard business cycle completes, run the following function to collect the undo tablespace size: DECLARE utbsiz_in_MB NUMBER; BEGIN utbsiz_in_MB := DBMS_UNDO_ADV.RBU_MIGRATION; end; / This function runs a PL/SQL procedure that provides information on how to size your new undo tablespace based on the configuration and usage of the rollback segments in your system. The function returns the sizing information directly. 4. Create an undo tablespace of the required size and turn on the automatic undo management by setting UNDO_MANAGEMENT=AUTO or by removing the parameter. Note: For RAC configurations, repeat these steps on all instances.
Direct NFS Client Overview
[Slide diagram: In Oracle Database 10g, the Oracle RDBMS kernel reaches NAS storage through a platform-specific kernel NFS driver that the DBA tunes with optional generic configuration parameters; there are variations across platforms and lots of parameters to tune. In Oracle Database 11g, the Direct NFS client is built into the Oracle RDBMS kernel itself and needs only a few specific configuration parameters.]
Direct NFS Client Overview Direct NFS is implemented as a Network File System client inside the Oracle RDBMS kernel, by way of the Oracle Disk Manager (ODM) library. NAS-based storage systems use the Network File System to access data. In Oracle Database 10g, NAS storage devices are accessed using the kernel network file system driver provided by the operating system, which requires specific configuration settings to ensure efficient and correct usage with Oracle Database. The following are the major problems that arise in correctly specifying these configuration parameters: • NFS clients are very inconsistent across platforms and vary across operating system releases. • With more than 20 parameters to tune, manageability is impacted. Oracle Direct NFS implements the NFS version 3 protocol within the Oracle RDBMS kernel. The following are the main advantages of implementing Oracle Direct NFS: • It enables complete control over the input-output path to Network File Servers. This results in predictable performance and enables simpler configuration management and superior diagnosability. • Its operations avoid the kernel network file system layer bottlenecks and resource limitations. However, the kernel is still used for network communication modules. • It provides a common Network File System interface for Oracle for potential use on all host platforms and supported Network File System servers. • It enables improved performance through load balancing across multiple connections to Network File System servers and deep pipelines of asynchronous input-output operations with improved concurrency.
Direct NFS Configuration
1. Mount all expected mount points using the kernel NFS driver.
Direct NFS Configuration By default, Direct NFS attempts to serve mount entries found in /etc/mtab. No other configuration is required. You can optionally use oranfstab to specify additional Oracle-specific options to Direct NFS; for example, you can use oranfstab to specify additional paths for a mount point. When oranfstab is placed in $ORACLE_HOME/dbs, its entries are specific to a single database. However, when oranfstab is placed in /etc, it is global to all Oracle databases, and hence can contain mount points for all Oracle databases. Direct NFS looks for the mount point entries in the following order: $ORACLE_HOME/dbs/oranfstab, /etc/oranfstab, and /etc/mtab. It uses the first matched entry as the mount point. In all cases, Oracle requires that mount points be mounted by the kernel NFS system even when being served through Direct NFS. Oracle verifies kernel NFS mounts by cross-checking entries in oranfstab with operating system NFS mount points. If a mismatch exists, then Direct NFS logs an informational message, and does not serve the NFS server. Complete the following procedure to enable Direct NFS: 1. Make sure NFS mount points are mounted by your kernel NFS client. The file systems to be used via ODM NFS should be mounted and available over regular NFS mounts in order for Oracle to retrieve certain bootstrapping information. The mount options used in mounting the file systems are not relevant.
Direct NFS Configuration (Continued) 2. Optionally create an oranfstab file with the following attributes for each NFS server to be accessed using Direct NFS: • Server: The NFS server name. • Path: Up to four network paths to the NFS server, specified either by IP address or by name, as displayed using the ifconfig command. The Direct NFS client performs load balancing across all specified paths. If a specified path fails, then Direct NFS reissues I/Os over any remaining paths. • Export: The exported path from the NFS server. • Mount: The local mount point for the NFS server. 3. Oracle Database uses the ODM NFS library libnfsodm11.so to enable Direct NFS. To replace the standard ODM library with the ODM NFS library, complete the following steps: • Change directory to $ORACLE_HOME/lib. • Enter the following commands: cp libodm11.so libodm11.so_stub ln -s libnfsodm11.so libodm11.so Use one of the following methods to disable the Direct NFS client: • Remove the oranfstab file. • Restore the stub libodm11.so file by reversing the process you completed in step 3. • Remove the specific NFS server or export paths in the oranfstab file. Note: • If you remove an NFS path that Oracle Database is using, then you must restart the database for the change to be effective. • If Oracle Database is unable to open an NFS server using Direct NFS, then Oracle Database uses the platform operating system kernel NFS client. In this case, the kernel NFS mount options must be set up correctly. Additionally, an informational message is logged in the Oracle alert and trace files indicating that Direct NFS could not be established. • With the current ODM architecture, there can be only one active ODM implementation per instance at any given time: using NFS ODM in an instance precludes any other ODM implementation. • The Oracle files resident on the NFS server that are served by the Direct NFS client are also accessible through the operating system kernel NFS client. The usual considerations for maintaining integrity of the Oracle files apply in this situation.
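A hedged sketch of an oranfstab entry, following the server/path/export/mount attributes described in step 2, is shown below; the server name, IP addresses, export, and mount point are placeholders for your environment:

```
# Example oranfstab entry (all values are placeholders)
server: mynfsserver
path: 192.0.2.10
path: 192.0.2.11
export: /vol/oradata mount: /u02/oradata
```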
Monitoring Direct NFS Use the following views for Direct NFS management: • V$DNFS_SERVERS: Shows a table of servers accessed using Direct NFS. • V$DNFS_FILES: Shows a table of files currently open using Direct NFS. • V$DNFS_CHANNELS: Shows a table of open network paths (or channels) to servers for which Direct NFS is providing files. • V$DNFS_STATS: Shows a table of performance statistics for Direct NFS.
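For example, to confirm which servers and files are being served through Direct NFS, you can query two of these views:

```sql
-- Which NFS servers is the Direct NFS client talking to?
SELECT svrname, dirname FROM v$dnfs_servers;

-- Which database files are currently open through Direct NFS?
SELECT filename FROM v$dnfs_files;
```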
Online Patching Overview
Online Patching provides the ability to:
• install
• enable
• disable
a bug fix or diagnostic patch on a running Oracle instance.
Online Patching Overview Online Patching provides the ability to install, enable, and disable a bug fix or diagnostic patch on a live, running Oracle instance.
Installing an Online Patch
• Applying an online patch does not require instance shutdown, relink of the oracle binary, or instance restart. • OPatch can be used to install or uninstall an online patch. • OPatch detects conflicts between two online patches, as well as between an online patch and a conventional patch.
Installing an Online Patch Unlike with traditional patching mechanisms, applying an online patch does not require instance shutdown or restart. Similar to traditional patching, you can use OPatch to install an online patch. You can determine if a patch is an online patch using the following command: opatch query -is_online_patch <patch location> or opatch query <patch location> -all Note: The patched code is shipped as a dynamic/shared library, which is then mapped into memory by each oracle process.
Online Patching Benefits
• No downtime and no interruption of business
• Incredibly fast install/uninstall time
• Integrated with OPatch:
– conflict detection
– listed in patch inventory
– works in RAC environments
• Even though the on-disk oracle binary is unchanged, online patches persist across instance shutdown and startup.
Online Patching Benefits You do not have to shut down your database instance while you apply an online patch. Unlike conventional patching, online patching is incredibly fast to install and uninstall. Because online patching uses OPatch, you get all the benefits you already have with conventional patching that uses OPatch. No matter how long or how many times you shut down your database, an online patch always persists across instance shutdown and startup.
Conventional Patching and Online Patching Conventional patching basically requires a shutdown of your database instance. Online patching does not require any downtime. Applications can keep running while you install or uninstall an online patch.
Online Patching Considerations
• Online patches may not be available on all platforms. Currently available on: – Linux x86 – Linux x86-64 – Solaris SPARC64.
• Some extra memory is consumed. Exact amount depends on: – Size of the patch – Number of concurrently running oracle processes. – The minimum amount of memory is approximately 1 OS page per running oracle process.
Online Patching Considerations One Operating System (OS) page is typically 4 KB on Linux x86 and 8 KB on Solaris SPARC64. Counting an average of a thousand oracle processes running at the same time, that represents around 4 MB of extra memory for a small online patch.
Online Patching Considerations
• There may be a small delay (a few seconds) before every oracle process installs/uninstalls an online patch. • Not all bug fixes and diagnostic patches are available as an online patch. • Use online patches in situations when downtime is not feasible • When downtime is possible, you should install any relevant bug fixes as conventional patches.
Online Patching Considerations The vast majority of diagnostic patches are available as online patches. For bug fixes, it really depends on their nature.
Using Online Patching
• Shops where downtime is extremely inconvenient or impossible (24x7)
• Bugs with an unknown cause that require a series of one or more diagnostic patches
Using Online Patching A very nice use case for online patching is when you hit a bug with an unknown cause. Oracle Support provides one or more diagnostic patches that can be installed quickly to narrow down the cause of the problem.
Summary
In this lesson, you should have learned how to: • Install Oracle Database 11g • Upgrade your database to Oracle Database 11g • Use online patching
After completing this lesson, you should be able to: • Set up ASM fast mirror resync • Use ASM preferred mirror read • Understand scalability and performance enhancements • Set up ASM disk group attributes • Use the SYSASM role • Use various new manageability options for CHECK, MOUNT, and DROP commands • Use the md_backup, md_restore, and repair ASMCMD extensions
Without ASM Fast Mirror Resync ASM offlines a disk whenever it is unable to complete a write to an extent allocated to the disk, while writing at least one mirror copy of the same extent on another disk if ASM redundancy is used by the corresponding disk group. With Oracle Database 10g, ASM assumes that an offline disk contains only stale data and therefore it does not read from such disks anymore. Shortly after a disk is put offline, ASM drops it from the disk group by recreating the extents allocated to the disk on the remaining disks in the disk group using redundant extent copies. This process is a relatively costly operation, and may take hours to complete. If the disk failure is only a transient failure, such as failures of cables, host bus adapters, or controllers, or disk power supply interruptions, you have to add the disk back again once the transient failure is fixed. However, adding the dropped disk back to the disk group incurs an additional cost of migrating extents back onto the disk.
Oracle Database 11g: New Features for Administrators 2 - 3
ASM Fast Mirror Resync Overview ASM redundancy is used
2
Disk access failure
Secondary extent
Primary extent
1
Oracle Database 11g
4 Disk again accessible: Only need to resync modified extents
ASM Fast Mirror Resync Overview ASM fast mirror resync significantly reduces the time required to resynchronize a disk after a transient failure. When a disk goes offline following a transient failure, ASM tracks the extents that are modified during the outage. When the transient failure is repaired, ASM can quickly resynchronize only the ASM disk extents that were affected during the outage. This feature assumes that the content of the affected ASM disks has not been damaged or modified. When an ASM disk path fails, the ASM disk is taken offline but not dropped if you have set the DISK_REPAIR_TIME attribute for the corresponding disk group. The setting for this attribute determines the duration of a disk outage that ASM tolerates while still being able to resynchronize after you complete the repair. Note: The tracking mechanism uses one bit for each modified allocation unit, which makes it very efficient.
Oracle Database 11g: New Features for Administrators 2 - 4
Using EM to Perform Fast Mirror Resync In Enterprise Manager (EM), when you offline an ASM disk, you are asked to confirm the operation. On the Confirmation page, you can override the default disk repair time. Similarly, you can view disks by failure group and choose a particular failure group to offline.
Oracle Database 11g: New Features for Administrators 2 - 5
Using EM to Perform Fast Mirror Resync Similarly, you can online disks using Enterprise Manager.
Oracle Database 11g: New Features for Administrators 2 - 6
Setting Up ASM Fast Mirror Resync
ALTER DISKGROUP dgroupA SET ATTRIBUTE 'DISK_REPAIR_TIME'='3H';
ALTER DISKGROUP dgroupA OFFLINE DISKS IN FAILGROUP controller2 DROP AFTER 5H;
ALTER DISKGROUP dgroupA ONLINE DISKS IN FAILGROUP controller2 POWER 2 WAIT;
ALTER DISKGROUP dgroupA DROP DISKS IN FAILGROUP controller2 FORCE;
Setting Up ASM Fast Mirror Resync You set up this feature on a per-disk-group basis. You can do so after disk group creation by using the ALTER DISKGROUP command. Use a command such as the following to enable ASM fast mirror resync: ALTER DISKGROUP dgroupA SET ATTRIBUTE 'DISK_REPAIR_TIME'='2D4H30M'; After you repair the disk, run the ALTER DISKGROUP ... ONLINE statement. This statement brings the repaired disk back online to enable writes so that no new writes are missed. It also starts a procedure to copy all of the extents that are marked as stale from their redundant copies. You cannot apply the ONLINE statement to already dropped disks. You can view the current attribute values by querying the V$ASM_ATTRIBUTE view. You can determine the time left before ASM drops an offlined disk by querying the REPAIR_TIMER column of either V$ASM_DISK or V$ASM_DISK_IOSTAT. In addition, a row corresponding to a disk resync operation appears in V$ASM_OPERATION with the OPERATION column set to SYNC.
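The views mentioned above can be queried from the ASM instance to monitor an outage and the subsequent resync. The following is a sketch; the disk group number and the presence of an offlined disk are assumed for illustration:

```sql
-- Current attribute values (including disk_repair_time) for disk group 1
SELECT name, value
FROM   v$asm_attribute
WHERE  group_number = 1;

-- Time left before ASM drops each offlined disk
SELECT name, mode_status, repair_timer
FROM   v$asm_disk
WHERE  mode_status = 'OFFLINE';

-- After ONLINE, the resync appears as a SYNC operation
SELECT group_number, operation, state
FROM   v$asm_operation;
```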
Oracle Database 11g: New Features for Administrators 2 - 7
Setting Up ASM Fast Mirror Resync (Continued) You can also use the ALTER DISKGROUP ... OFFLINE SQL statement to take ASM disks offline manually for preventive maintenance. With this statement, you can specify a timer that overrides the one defined at the disk group level. After you complete the maintenance, use the ALTER DISKGROUP ... ONLINE statement to bring the disk back online. If you cannot repair a failure group that is in the offline state, you can use the ALTER DISKGROUP ... DROP DISKS IN FAILGROUP command with the FORCE option. This ensures that data originally stored on these disks is reconstructed from redundant copies of the data and stored on other disks in the same disk group. Note: The repair timer elapses only while the disk group is mounted. Also, changing the value of DISK_REPAIR_TIME does not affect disks that were previously offlined. The default setting of 3.6 hours for DISK_REPAIR_TIME should be adequate for most environments.
Oracle Database 11g: New Features for Administrators 2 - 8
ASM Preferred Mirror Read Overview When you configure ASM failure groups, ASM in Oracle Database 10g always reads the primary copy of a mirrored extent. It may be more efficient for a node to read from a failure group extent that is closest to the node, even if it is a secondary extent. This is especially true in extended cluster configurations, where reading from a local copy of an extent provides improved performance. With Oracle Database 11g, you can do this by configuring preferred mirror read using the new initialization parameter, ASM_PREFERRED_READ_FAILURE_GROUPS, to specify a list of failure group names. The disks in those failure groups become the preferred read disks. Thus, every node can read from its local disks. This results in higher efficiency and performance and reduced network traffic. The setting for this parameter is instance-specific.
Oracle Database 11g: New Features for Administrators 2 - 9
ASM Preferred Mirror Read Setup Setup
ASM_PREFERRED_READ_FAILURE_GROUPS=DATA.SITEA
On first instance
ASM_PREFERRED_READ_FAILURE_GROUPS=DATA.SITEB
On second instance
Monitor
SELECT preferred_read FROM v$asm_disk; SELECT * FROM v$asm_disk_iostat;
ASM Preferred Mirror Read Setup To configure this feature, set the new ASM_PREFERRED_READ_FAILURE_GROUPS initialization parameter. This parameter can take multiple values: a comma-separated list of failure group names, each prefixed with its disk group name and a '.' character. This parameter is dynamic and can be modified using the ALTER SYSTEM command at any time. An example is shown on the slide. This initialization parameter is valid only for ASM instances. In a stretch cluster, the failure groups specified in this parameter should contain only the disks that are local to the corresponding instance. The new column PREFERRED_READ has been added to the V$ASM_DISK view. Its format is a single character: if the disk belongs to a preferred read failure group, the value of this column is Y. To identify specific performance issues with ASM preferred read failure groups, use the V$ASM_DISK_IOSTAT view. This view displays disk I/O statistics for each ASM client. If this view is queried from a database instance, only the rows for that instance are shown.
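Putting the slide example together, a two-site stretch cluster might be configured as follows. This is a sketch: the ASM instance names (+ASM1, +ASM2) and the failure group names (SITEA, SITEB) are illustrative, not prescribed by the feature.

```sql
-- On the ASM instance local to site A
ALTER SYSTEM SET asm_preferred_read_failure_groups = 'DATA.SITEA'
  SID = '+ASM1';

-- On the ASM instance local to site B
ALTER SYSTEM SET asm_preferred_read_failure_groups = 'DATA.SITEB'
  SID = '+ASM2';

-- Verify which disks are now preferred for local reads
SELECT name, failgroup, preferred_read FROM v$asm_disk;
```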
Oracle Database 11g: New Features for Administrators 2 - 10
Enterprise Manager ASM Configuration Page You can specify a set of disks as preferred disks for each ASM instance by using Enterprise Manager. The preferred read attributes are instance-specific. In Oracle Database 11g, the Preferred Read Failure Groups field (ASM_PREFERRED_READ_FAILURE_GROUPS) is added to the configuration page. This parameter takes effect only before the disk group is mounted or when the disk group is created. It applies only to newly opened files or to a newly loaded extent map for a file.
Oracle Database 11g: New Features for Administrators 2 - 11
ASM Preferred Mirror Read - Best Practice Two sites / Normal redundancy
ASM Preferred Mirror Read - Best Practice In practice, there are only a limited number of good disk group configurations in a stretch cluster. A good configuration takes into account both the performance and the availability of a disk group in a stretch cluster. Here are some possible examples: For a two-site stretch cluster, a normal redundancy disk group should have only two failure groups, and all disks local to one site should belong to the same failure group. Also, at most one failure group should be specified as a preferred read failure group by each instance. If there are more than two failure groups, ASM may not mirror a virtual extent across both sites. Furthermore, if the site hosting more than one failure group were to go down, it would take the disk group down as well. If the disk group to be created is a high redundancy disk group, at most two failure groups should be created on each site with its local disks, with both local failure groups specified as preferred read failure groups for the local instance. For a three-site stretch cluster, a high redundancy disk group with three failure groups should be used. This allows ASM to guarantee that each virtual extent has a mirror copy local to each site and that the disk group is protected against a catastrophic disaster on any one of the three sites.
Oracle Database 11g: New Features for Administrators 2 - 12
ASM Scalability and Performance Enhancements
• Extent size grows automatically according to file size
• ASM supports variable extent sizes to:
– Raise the maximum possible file size
– Reduce memory utilization in the shared pool
• No administration is needed, apart from a manual rebalance in case of significant fragmentation
ASM Scalability and Performance Enhancements ASM Variable Size Extents is an automated feature that enables ASM to support larger file extents while improving memory usage efficiency. In Oracle Database 11g, ASM supports variable extent sizes of 1, 4, 16, and 64 MB. ASM uses a predetermined number of extents of each size. As soon as a file crosses a certain threshold, the next extent size is used. An ASM file can begin with 1 MB extents, and as the file's size increases, the extent size also increases to 4, 16, or 64 MB based on predefined file size thresholds. With this feature, fewer extent pointers are needed to describe the file, and less memory is required to manage the extent maps in the shared pool, which would have been prohibitive in large file configurations. Extent size can vary both across files and within files. Variable Size Extents also enable you to deploy Oracle databases using ASM that are several hundred terabytes, or even several petabytes, in size. The management of variable size extents is completely automated and does not require manual administration.
Oracle Database 11g: New Features for Administrators 2 - 13
ASM Scalability and Performance Enhancements (Continued) However, external fragmentation may occur when a large number of noncontiguous small data extents have been allocated and freed, and no additional contiguous large extents are available. A defragmentation operation is integrated as part of any rebalance operation. So, as a DBA, you always have the possibility to defragment your disk group by executing a rebalance operation. Nevertheless, this should be needed only very rarely, because ASM also automatically performs defragmentation during extent allocation if the desired size is unavailable. This can potentially make some allocation operations take longer. Note: This feature also enables much faster file opens because of the significant reduction in the amount of memory that is required to store file extents.
Oracle Database 11g: New Features for Administrators 2 - 14
ASM Scalability In Oracle Database 11g
ASM imposes the following limits:
• 63 disk groups
• 10,000 ASM disks
• 4 petabytes per ASM disk
• 40 exabytes of storage
• 1 million files per disk group
• Maximum file size:
– External redundancy: 140 PB
– Normal redundancy: 42 PB
– High redundancy: 15 PB
ASM imposes the following limits: • 63 disk groups in a storage system • 10,000 ASM disks in a storage system • 4 petabytes maximum storage for each ASM disk • 40 exabytes maximum storage for each storage system • 1 million files for each disk group • Maximum file sizes depend on the redundancy type of the disk group used: 140 PB for external redundancy (a value currently greater than any possible database file size), 42 PB for normal redundancy, and 15 PB for high redundancy. Note: In Oracle Database 10g, the maximum ASM file size for external redundancy was 35 TB.
Oracle Database 11g: New Features for Administrators 2 - 15
SYSASM Overview • SYSASM role to manage ASM instances avoids overlap between DBAs and storage administrators
SQL> CONNECT / AS SYSASM
SQL> CREATE USER ossysasmusername IDENTIFIED BY passwd;
SQL> GRANT SYSASM TO ossysasmusername;
SQL> CONNECT ossysasmusername/passwd AS SYSASM
SQL> DROP USER ossysasmusername;
• SYSDBA will be deprecated: – Oracle Database 11g Release 1 behaves as in 10g – In future releases SYSDBA privileges restricted in ASM instances
SYSASM Overview This feature introduces a new SYSASM role that is specifically intended for performing ASM administration tasks. Using the SYSASM role instead of the SYSDBA role improves security by separating ASM administration from database administration. As of Oracle Database 11g Release 1, the OS group for SYSASM and SYSDBA is the same, and the default installation group for SYSASM is dba. In a future release, separate groups will have to be created, and SYSDBA users will be restricted in ASM instances. Currently, as a member of the dba group, you can connect to an ASM instance using the first statement above. You can also use the combination of the CREATE USER and GRANT SYSASM SQL statements from an ASM instance to create a new SYSASM user, as long as the name of the user is an existing OS user name. These commands update the password file of each ASM instance and do not require the instance to be up and running. Similarly, you can revoke the SYSASM role from a user using the REVOKE command, and you can drop a user from the password file using the DROP USER command. Note: With Oracle Database 11g Release 1, if you log in to an ASM instance as SYSDBA, warnings are written in the corresponding alert.log file.
Oracle Database 11g: New Features for Administrators 2 - 16
Using EM to Manage ASM Users EM allows you to manage the users who access the ASM instance through remote connection (using password file authentication). These users are used exclusively for the ASM instance. This functionality is available only when you are connected as a SYSASM user; it is hidden if you connect as a SYSDBA or SYSOPER user. When you click the Create button, the Create User page is displayed. When you click the Edit button, the Edit User page is displayed. By clicking the Delete button, you can delete previously created users. Note: Oracle Database 11g adds the SYSASM role to the ASM instance login page.
Oracle Database 11g: New Features for Administrators 2 - 17
ASM Disk Group Compatibility
• Compatibility of each disk group is separately controllable: – ASM compatibility controls ASM metadata on disk structure – RDBMS compatibility controls minimum client level – Useful with heterogeneous environments
• Setting disk group compatibility is irreversible
ASM Disk Group Compatibility There are two kinds of compatibility applicable to ASM disk groups: one deals with the persistent data structures that describe a disk group, and the other with the capabilities of the clients (the consumers of disk groups). These attributes are called ASM compatibility and RDBMS compatibility, respectively. The compatibility of each disk group is independently controllable. This is required to enable heterogeneous environments with disk groups from both Oracle Database 10g and Oracle Database 11g. These two compatibility settings are attributes of each ASM disk group: • RDBMS compatibility refers to the minimum compatible version of the RDBMS instance that would allow the instance to mount the disk group. This compatibility dictates the format of messages that are exchanged between the ASM and database (RDBMS) instances. An ASM instance can support RDBMS clients running at different compatibility settings. The database compatible version setting of each instance must be greater than or equal to the RDBMS compatibility of all disk groups used by that database. Database instances are typically run from a different Oracle home than the ASM instance, which implies that the database instance may be running a different software version than the ASM instance. When a database instance first connects to an ASM instance, it negotiates the highest version that they both can support. The compatible parameter setting of the database, the software version of the database, and the RDBMS compatibility setting of a disk group determine whether a database instance can mount a given disk group.
Oracle Database 11g: New Features for Administrators 2 - 18
ASM Disk Group Compatibility (Continued) • ASM compatibility refers to the persistent compatibility setting controlling the format of the data structures for ASM metadata on disk. The ASM compatibility level of a disk group must always be greater than or equal to the RDBMS compatibility level of the same disk group. ASM compatibility is concerned only with the format of the ASM metadata; the format of the file contents is up to the database instance. For example, the ASM compatibility of a disk group can be set to 11.0 while its RDBMS compatibility is 10.1. This implies that the disk group can be managed only by ASM software of version 11.0 or higher, while any database client of software version 10.1 or higher can use that disk group. The compatibility of a disk group needs to be advanced only when there is a change to either the persistent disk structures or the protocol messaging. However, advancing disk group compatibility is an irreversible operation. You can set the disk group compatibility by using either the CREATE DISKGROUP or ALTER DISKGROUP command. Note: In addition to the disk group compatibilities, the compatible parameter (database compatible version) determines the features that are enabled; it applies to the database or ASM instance depending on the instance_type parameter. For example, setting it to 10.1 would preclude the use of the new features introduced in Oracle Database 11g (disk online/offline, variable extents, and so on).
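As a sketch, the compatibility attributes can be advanced and then inspected as follows. The disk group name is illustrative; remember that advancing compatibility is irreversible:

```sql
-- Advance the on-disk metadata format first, then the client level
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.asm'   = '11.1';
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.rdbms' = '11.1';

-- Inspect the settings for all mounted disk groups
SELECT name, compatibility, database_compatibility
FROM   v$asm_diskgroup;
```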
Oracle Database 11g: New Features for Administrators 2 - 19
ASM Disk Group Attributes

Name                       Property  Values                   Description
au_size                    C         1|2|4|8|16|32|64 MB      Size of allocation units in the disk group
compatible.rdbms           AC        Valid database version   Format of messages exchanged between DB and ASM
compatible.asm             AC        Valid database version   Format of ASM metadata structures on disk
disk_repair_time           A         0 M to 2^32 D            Length of time before removing a disk once OFFLINE
template.tname.redundancy  A         UNPROTECTED|MIRROR|HIGH  Redundancy of specified template
template.tname.stripe      A         COARSE|FINE              Striping attribute of specified template

(Property column: C = settable at disk group creation, A = settable with ALTER DISKGROUP)
CREATE DISKGROUP DATA NORMAL REDUNDANCY DISK '/dev/raw/raw1','/dev/raw/raw2' ATTRIBUTE 'compatible.asm'='11.1';
ASM Disk Group Attributes Whenever you create or alter an ASM disk group, you can set its attributes using the new ATTRIBUTE clause of the CREATE DISKGROUP and ALTER DISKGROUP commands. These attributes are briefly summarized in the table above: • ASM enables the use of different allocation unit (AU) sizes that you specify when you create a disk group. The AU can be 1, 2, 4, 8, 16, 32, or 64 MB in size. • RDBMS compatibility: See the ASM Disk Group Compatibility slide for more information. • ASM compatibility: See the ASM Disk Group Compatibility slide for more information. • You can specify DISK_REPAIR_TIME in units of minutes (M), hours (H), or days (D). If you omit the unit, the default is H. If you omit this attribute, the default is 3.6H. You can override this attribute with an ALTER DISKGROUP ... DISK OFFLINE statement. • You can also specify the redundancy attribute of the specified template. • You can also specify the striping attribute of the specified template. Note: For each defined disk group, you can look at all defined attributes through the V$ASM_ATTRIBUTE fixed view.
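Building on the slide example, several attributes can be combined in a single CREATE DISKGROUP statement. This is a sketch; the device paths and attribute values are illustrative:

```sql
-- Create a disk group with a 4 MB allocation unit, 11.1 metadata
-- format, and a 5-hour repair window for transient disk failures
CREATE DISKGROUP data NORMAL REDUNDANCY
  DISK '/dev/raw/raw1', '/dev/raw/raw2'
  ATTRIBUTE 'au_size'          = '4M',
            'compatible.asm'   = '11.1',
            'disk_repair_time' = '5H';

-- Review the resulting attributes
SELECT name, value FROM v$asm_attribute;
```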
Oracle Database 11g: New Features for Administrators 2 - 20
Using EM to Edit Disk Group Attributes EM provides a simple way to store and retrieve environment settings related to disk groups. You can now set the compatibility attributes from both the create disk group page and the edit disk group advanced attributes page. The disk_repair_time attribute is added to the edit disk group advanced attributes page only. Note: For pre-11g ASM instances, the default ASM compatibility and database compatibility are 10.1. For 11g ASM instances, the default ASM compatibility is 11.1 and the default database compatibility is 10.1.
Oracle Database 11g: New Features for Administrators 2 - 21
Enhanced Disk Group Checks
• Disk group check syntax is simplified – FILE and DISK options do the same as ALL
• Additional checks performed: – Alias – Directories ALTER DISKGROUP DATA CHECK;
Enhanced Disk Group Checks The CHECK disk group command is simplified to check all the metadata directories by default. The CHECK command lets you verify the internal consistency of ASM disk group metadata. ASM displays summary errors and writes the details of the detected errors to the alert log. In earlier releases, you could specify this clause for ALL, DISK, DISKS IN FAILGROUP, or FILE. Those clauses have been deprecated because they are no longer needed. In the current release, the CHECK keyword performs the following operations: • Checks the consistency of the disks, equivalent to CHECK DISK and CHECK DISKS IN FAILGROUP in previous releases. • Cross-checks all the file extent maps and allocation tables for consistency, equivalent to CHECK FILE in previous releases. • Checks that the alias metadata directory and file directory are linked correctly. • Checks that the alias directory tree is linked correctly. • Checks that the ASM metadata directories do not have unreachable allocated blocks. The REPAIR | NOREPAIR clause lets you instruct ASM whether or not to attempt to repair any errors found during the consistency check. The default is REPAIR. The NOREPAIR setting is useful if you want to be alerted to any inconsistencies but do not want ASM to take any automatic action to resolve them. Note: Introducing extra checks as part of the disk group check does slow down the entire check disk group operation.
Oracle Database 11g: New Features for Administrators 2 - 22
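For example, the two forms of the clause might be used as follows (the disk group name is illustrative):

```sql
-- Report inconsistencies only; take no corrective action
ALTER DISKGROUP data CHECK NOREPAIR;

-- Default behavior: attempt to repair what the check finds
ALTER DISKGROUP data CHECK REPAIR;
```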
Restricted Mount Disk Group For Fast Rebalance
• Disk group can be mounted on only a single instance • No database clients or other ASM instances can get access • Rebalance can proceed without locking overhead
Restricted Mount Disk Group For Fast Rebalance A new mode for mounting a disk group in Oracle Database 11g is called RESTRICTED. When a disk group is mounted in RESTRICTED mode, clients cannot access the files in the disk group. Because the ASM instance knows that there are no clients, it can improve the performance of the rebalance operation by not attempting to message clients to lock and unlock extent maps. A disk group mounted in RESTRICTED mode is mounted exclusively on only one node, and clients of ASM on that node cannot use that disk group. The RESTRICTED mode allows you to perform all maintenance tasks on a disk group in the ASM instance without any external interaction. At the end of the maintenance cycle, you have to explicitly dismount the disk group and remount it in normal mode. The ALTER DISKGROUP diskgroupname MOUNT command is extended to allow ASM to mount the disk group in restricted mode. When you use the RESTRICTED option to start up an ASM instance, all the disk groups defined in the ASM_DISKGROUPS parameter are mounted in RESTRICTED mode. Note: The restricted mode is not allowed if the cluster is in rolling migration.
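A maintenance cycle using restricted mount might look like the following sketch; the disk group name and the rebalance power value are illustrative:

```sql
-- Mount exclusively, with no client access, for fast rebalance
ALTER DISKGROUP data MOUNT RESTRICTED;

-- Rebalance proceeds without extent-map locking messages
ALTER DISKGROUP data REBALANCE POWER 11;

-- Return the disk group to normal use
ALTER DISKGROUP data DISMOUNT;
ALTER DISKGROUP data MOUNT;
```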
Oracle Database 11g: New Features for Administrators 2 - 23
Mount Force Disk Group
• By default, MOUNT is NOFORCE:
– All disks must be available
• MOUNT with FORCE:
– Offlines unavailable disks if a quorum exists
– Fails if all disks are available
ALTER DISKGROUP data MOUNT [FORCE | NOFORCE];
Mount Force Disk Group This feature alters the behavior of ASM when mounting an incomplete disk group. With Oracle Database 10g, as long as there are enough failure groups to mount a disk group, the mount operation succeeds, even when there are potentially missing or damaged failure groups. This behavior has the potential to automatically drop ASM disks, requiring you to add them back again later when repaired, incurring a long rebalance operation. With Oracle Database 11g, such an operation fails unless you specify the new FORCE option when mounting the damaged disk group. This allows you to correct configuration errors, such as ASM_DISKSTRING being set incorrectly, or connectivity issues, before trying the mount again. However, disk groups mounted with the FORCE option could potentially have one or more disks offline if they were not available at the time of the mount. You must take corrective action before DISK_REPAIR_TIME expires to restore those devices. Failing to online those devices results in the disks being expelled from the disk group and a costly rebalance being needed to restore redundancy for all the files in the disk group. Also, if one or more devices are offlined as a result of MOUNT FORCE, some or all files are not properly protected until redundancy is restored in the disk group via rebalance. So, mount with FORCE is useful in cases where you know a priori that some of the disks belonging to a disk group are unavailable. The disk group mount succeeds if ASM finds enough disks to form a quorum.
Oracle Database 11g: New Features for Administrators 2 - 24
Mount Force Disk Group (Continued) Mount with NOFORCE is the default option of MOUNT when none is specified. In NOFORCE mode, all the disks that belong to a disk group must be accessible for the mount to succeed. Note: Specifying the FORCE option when it is not necessary also results in an error. There is one special case in a cluster: if an ASM instance is not the first to mount the disk group, MOUNT FORCE fails with an error if disks are determined to be inaccessible locally but accessible by another instance.
Oracle Database 11g: New Features for Administrators 2 - 25
Drop Force Disk Group
• Allows users to drop disk groups that cannot be mounted • Fails if disk group is mounted anywhere DROP DISKGROUP data FORCE INCLUDING CONTENTS;
Drop Force Disk Group DROP DISKGROUP with the FORCE option marks the headers of the disks belonging to a disk group that cannot be mounted by the ASM instance as FORMER. However, the ASM instance first determines whether the disk group is being used by any other ASM instance using the same storage subsystem. If it is being used, and the disk group is in the same cluster or on the same node, the statement fails. If the disk group is in a different cluster, the system further checks whether the disk group is mounted by any instance in the other cluster. If it is mounted elsewhere, the statement fails. However, this latter check is not as definitive as the checks for disk groups in the same cluster; therefore, use this clause with caution. Note: When executing the DROP DISKGROUP command with the FORCE option, you must also specify the INCLUDING CONTENTS clause.
Oracle Database 11g: New Features for Administrators 2 - 26
ASMCMD Extensions
Metadata captured by md_backup:
• User-created directories
• Templates
• Disk group compatibility
• Disk group name
• Disk names and failure groups
ASMCMD Extensions • ASMCMD is extended to include ASM metadata backup and restore functionality. This provides the ability to re-create a pre-existing ASM disk group with the exact same template and alias directory structure. Without this functionality, if an ASM disk group is lost, it is possible to restore the lost files using RMAN, but you have to manually re-create the ASM disk group and any required user directories and templates; there is no way to back up and restore the ASM metadata itself. ASM metadata backup and restore (AMBR) works in two modes. In backup mode, it parses ASM fixed tables and views to gather information about existing disk and failure group configurations and the template and alias directory structure. It then dumps this metadata information to a text file. In restore mode, AMBR reads the previously generated file to reconstruct the disk group and its metadata. You can control AMBR behavior in restore mode to do a full, nodg, or newdg restore. The three sub-modes differ in whether the disk group itself is created and whether its characteristics are changed. • The lsdsk command lists ASM disk information. This command can run in two modes: connected and non-connected. In connected mode, ASMCMD uses the V$ and GV$ views to retrieve disk information. In non-connected mode, ASMCMD scans disk headers to retrieve disk information, using an ASM disk string to restrict the discovery set. The connected mode is always attempted first.
Oracle Database 11g: New Features for Administrators 2 - 27
ASMCMD Extensions (Continued) • Bad block repair is a new feature that runs automatically on normal or high redundancy disk groups. When a normal read from an ASM disk group fails with an I/O error, ASM attempts to repair that block by reading the mirror copy and writing it back, relocating the block if the copy fails to produce a good read. This whole process happens automatically, but only for blocks that are actually read. It is possible that some blocks and extents on an ASM disk group are seldom read; one prime example is secondary extents. The ASMCMD repair command is designed to trigger a read on these extents so that the resulting I/O failure can start the automatic block repair process. If the storage array returns an error on a physical block, you can use the ASMCMD repair interface to initiate a read on that block and trigger the repair. Note: For more information about the syntax of each of these commands, refer to the Oracle Database Storage Administrator's Guide 11g Release 1 (11.1).
Oracle Database 11g: New Features for Administrators 2 - 28
ASMCMD Extension Examples
ASMCMD> md_backup -b jfv_backup_file -g data
Disk group to be backed up: DATA#
Current alias directory path: jfv
ASMCMD>
Unintentional disk group drop
ASMCMD> md_restore -b jfv_backup_file -t full -g data
Disk group to be restored: DATA#
ASMCMDAMBR-09358, Option -t newdg specified without any override options.
Current Diskgroup being restored: DATA
Diskgroup DATA created!
User Alias directory +DATA/jfv created!
ASMCMD>
ASMCMD Extension Examples This example describes how to back up ASM metadata using the md_backup command, and how to restore it using the md_restore command. The first statement specifies the -b option and the -g option of the command, to define the name of the generated file containing the backup information as well as the disk group that needs to be backed up: jfv_backup_file and data, respectively, in the above example. At step two, it is assumed that there is a problem with the DATA disk group and, as a result, it gets dropped. Before you can restore the database files it contained, you have to restore the disk group itself. At step three, you initiate the disk group re-creation as well as the restoration of its metadata using the md_restore command. Here, you specify the name of the backup file generated at step one, the name of the disk group you want to restore, and the type of restore you want to perform. A full restore of the disk group is done here because it no longer exists. Once the disk group is re-created, you can restore its database files, using RMAN for example.
Oracle Database 11g: New Features for Administrators 2 - 29
Summary
In this lesson, you should have learned how to:
• Set up ASM fast mirror resync
• Use ASM preferred mirror read
• Set up ASM disk group attributes
• Use the SYSASM role
• Use various new manageability options for the CHECK, MOUNT, and DROP commands
• Use the md_backup, md_restore, and repair ASMCMD extensions
Oracle Database 11g: New Features for Administrators 3 - 2
SQL Performance Analyzer Overview
• New feature in 11g
• Targeted users: DBAs, QA, and application developers
• Helps predict the impact of system changes on SQL workload response time
• Builds different versions of SQL workload performance (that is, SQL execution plans and execution statistics)
• Executes SQL serially: concurrency is not respected
• Analyzes performance differences
• Offers fine-grained performance analysis on individual SQL
• Integrated with SQL Tuning Advisor to tune regressions
SQL Performance Analyzer Overview Oracle Database 11g introduces the SQL Performance Analyzer feature. SQL Performance Analyzer helps you forecast the impact of a potential change on the performance of a SQL query workload. This enables you to make changes in a test environment and determine whether the workload performance will be improved by, for example, a database upgrade.
SQL Performance Analyzer Use Cases
SQL Performance Analyzer is beneficial in the following use cases: • Database upgrades • Implementation of tuning recommendations • Schema changes • Statistics gathering • Database parameter changes • OS/hardware changes
SQL Performance Analyzer Use Cases SQL Performance Analyzer can be used to predict and prevent potential performance problems for any database environment change that affects the structure of SQL execution plans. The changes can include, but are not limited to, any of the following: • Database upgrades • Implementation of tuning recommendations • Schema changes • Statistics gathering • Database parameter changes • OS/hardware changes DBAs can use SQL Performance Analyzer to foresee SQL performance changes induced by such changes, even in the most complex environments. As applications evolve through the development life cycle, database application developers can test changes to schemas, database objects, and rewritten applications, for example, to mitigate any potential performance impact. SQL Performance Analyzer also allows for the comparison of SQL performance statistics.
Usage Model (1): Capture SQL Workload
• SQL Tuning Set (STS) used to store the SQL workload. Includes:
– SQL text
– Bind variables
– Execution plans
– Execution statistics
• Incremental capture used to populate the STS from the cursor cache of the database instance over a period of time
• The SQL tuning set's filtering and ranking capabilities filter out undesirable SQL
Usage Model (2): Transport to a Test System
[Diagram: the SQL tuning set is copied from the cursor cache of the production database instance to the test database instance]
• Copy the SQL tuning set to a staging table ("pack")
• Transport the staging table to the test system (Data Pump, database link, and so on)
• Copy the SQL tuning set from the staging table ("unpack")

Make the change, then test-execute:
• Database upgrade, implementation of tuning recommendations, schema changes, statistics gathering, database parameter changes, OS/hardware changes, and so on
• Test-execute the SQL in the SQL tuning set to generate SQL execution plans and execution statistics
• Explain plan the SQL in the SQL tuning set to generate SQL plans
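The pack/transport/unpack sequence can be sketched with the DBMS_SQLTUNE staging-table APIs. This is a hedged example: MY_STS and STS_STAGE are hypothetical names, and optional parameters are omitted.

```sql
-- On production: create a staging table and pack the STS into it
BEGIN
  DBMS_SQLTUNE.CREATE_STGTAB_SQLSET(table_name => 'STS_STAGE');
  DBMS_SQLTUNE.PACK_STGTAB_SQLSET(
    sqlset_name        => 'MY_STS',
    staging_table_name => 'STS_STAGE');
END;
/

-- Move STS_STAGE to the test system (Data Pump, database link, and so on)

-- On the test system: unpack the staging table back into an STS
BEGIN
  DBMS_SQLTUNE.UNPACK_STGTAB_SQLSET(
    sqlset_name        => 'MY_STS',
    replace            => TRUE,
    staging_table_name => 'STS_STAGE');
END;
/
```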
Usage Model (5): Compare and Analyze Performance
• Rely on a user-specified metric to compare SQL performance: elapsed_time, buffer_gets, disk_reads, ...
• Calculate the change impact on individual SQL statements and on the SQL workload:
– Overall impact on the workload
– SQL net impact on the workload
• Use SQL execution frequency to define a weight of importance
• Detect improvements, regressions, and unchanged performance
• Detect changes in execution plans
• Recommend running SQL Tuning Advisor to tune regressed SQL statements
• Analysis results can be used to seed SQL Plan Management baselines
SQL Performance Analyzer: Summary
1. Capture the SQL workload on production
2. Transport the SQL workload to a test system
3. Build "before-change" performance data
4. Make changes
5. Build "after-change" performance data
6. Compare the results from steps 3 and 5
7. Tune regressed SQL
SQL Performance Analyzer: Summary
1. Gather SQL: In this phase you collect the set of SQL statements that represents your SQL workload on the production system. You can use SQL tuning sets (STS) or the Automatic Workload Repository (AWR) to capture the information to transport. Because AWR essentially captures high-load SQL, you should consider modifying the default AWR snapshot settings and the captured Top SQL to ensure that AWR captures the maximum number of SQL statements. This ensures a more complete SQL workload capture.
2. Transport: Here you transport the resulting workload to the test system. The STS is exported from the production system and imported into the test system.
3. Compute "before-version" performance: Before any changes take place, you execute the SQL statements, collecting the baseline information needed to assess the impact a future change may have on the performance of the workload. The information collected in this stage represents a snapshot of the current state of the system workload. The performance data includes:
• Execution plans: generated, for example, by EXPLAIN PLAN
• Execution statistics: for example, elapsed time, buffer gets, disk reads, and rows processed
4. Make a change: Once you have the before-version data, you can implement your planned change and start viewing its impact on performance.
SQL Performance Analyzer: Summary (Continued)
5. Compute "after-version" performance: This step takes place after the change is made in the database environment. Each statement of the SQL workload runs in a mock execution that collects statistics only, capturing the same information as in step 3.
6. Compare and analyze SQL performance: Once you have both versions of the SQL workload performance data, you can perform the performance analysis by comparing the after-version data with the before-version data. The comparison is based on execution statistics, such as elapsed time, CPU time, and buffer gets.
7. Tune regressed SQL: At this stage, you have identified exactly which SQL statements may cause performance problems when the database change is made. From here, you can use any of the database tools to tune the system. For example, you could run the SQL Tuning Advisor or the SQL Access Advisor against the identified statements and implement their recommendations. Alternatively, you can seed SPM with the plans captured in step 3 to guarantee that the plans remain the same. Once you implement any tuning action, you should repeat the process to create a new "after version" and analyze the performance differences to ensure that the new performance is acceptable.
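The performance-computation and comparison steps above can also be driven from the command line with the DBMS_SQLPA package. This is a hedged sketch: MY_STS and MY_SPA_TASK are hypothetical names, and error handling is omitted.

```sql
DECLARE
  tname VARCHAR2(100);
BEGIN
  -- Create an analysis task over the transported STS
  tname := DBMS_SQLPA.CREATE_ANALYSIS_TASK(
             sqlset_name => 'MY_STS',
             task_name   => 'MY_SPA_TASK');

  -- Step 3: build "before-change" performance data
  DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
    task_name      => 'MY_SPA_TASK',
    execution_type => 'TEST EXECUTE',
    execution_name => 'before_change');

  -- Step 4: make the change (upgrade, parameter change, and so on)

  -- Step 5: build "after-change" performance data
  DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
    task_name      => 'MY_SPA_TASK',
    execution_type => 'TEST EXECUTE',
    execution_name => 'after_change');

  -- Step 6: compare the two executions
  DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
    task_name      => 'MY_SPA_TASK',
    execution_type => 'COMPARE PERFORMANCE');
END;
/

-- Produce a text report of the comparison
SELECT DBMS_SQLPA.REPORT_ANALYSIS_TASK('MY_SPA_TASK', 'text') FROM dual;
```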
Enterprise Manager: Capturing the SQL Workload Enterprise Manager (EM) launches a wizard allowing you to create a new SQL Tuning Set. This wizard lets you select a load method and a data source, specify filter conditions for the SQL statements to be loaded into the new SQL Tuning Set, and schedule the capture as a job to be executed at a particular time. You access the SQL Tuning Sets page from the Performance tab in Oracle Database 11g Database Control. From this page you can create an STS. The workload you capture should reflect a representative period of time (in captured SQL statements) that you wish to test under some changed condition. The following information is captured in this process: • The SQL text • The execution context, including bind values, the parsing schema, and the compilation environment, which contains the set of initialization parameters under which the statement is executed • The execution frequency, which tells how many times the SQL statement was executed during the time interval of the workload Normally, SQL capture happens on the production system, where the real workload runs. The performance data is computed later, on the test system, by the compute SQL performance processes. SQL Performance Analyzer tracks the SQL performance of the same STS before and after a change is made to the database.
Capturing the SQL Workload The Create SQL Tuning Set wizard provided in EM in Oracle Database 11g guides you through the capture of SQL statements. On the Options page, you define the attributes needed to create a new SQL Tuning Set. You can decide whether or not to collect SQL statements at this time. If you choose not to, the wizard jumps to the Review page.
Capturing the SQL Workload You choose either to incrementally collect the SQL workload over a period of time, or to collect SQL statements one time only. You can capture the SQL statements from the following sources: • Cursor Cache • AWR Snapshots • AWR Baselines • User-defined Workload: a user-defined table that stores SQL statements. It must have sql_text and parsing_schema_name columns; ideally, it should also have columns that contain SQL statistics. EM provides the following support for SQL Performance Analyzer: • View previously captured workloads and their details • Capture the SQL • Export a workload • Import a workload • Compute SQL performance • Manage SQL performance data • Report analysis results • Run SQL Tuning Advisor to tune regressed SQL statements • View previously executed SQL Performance Analyzer tasks and their results
Capturing the SQL Workload Creating a SQL Tuning Set:
Capturing the SQL Workload You can then create filters on the SQL statements to be captured. In the example above, the APPS schema, SELECT statements, and the APPS_DEMO module are selected for capture from the cursor cache. The actual filter options depend on the selected load method. The final stage of the wizard lets you select a job schedule time (IMMEDIATE, LATER), review your job options, and submit the job.
Exporting the SQL Workload From this page you can choose to export the selected STS for transport to the test system. You can also drill down to see the SQL statements contained within the selected STS. You also use this page to import an STS from a previously exported file; this is how you would load an STS on the test system for comparison purposes.
Creating a SQL Performance Analyzer Task EM helps you manage each component in the SQL Performance Analyzer process and reports the analysis results. The workflow and user interface apply to both EM Database Control and EM Grid Control. You access SQL Performance Analyzer from the Software and Support tab of Database Control, or through Database Instance > Advisor Central > SQL Performance Analyzer. This takes you to the screen where you create the tasks needed to capture the before, after, and tuned performance data.
SQL Performance Analyzer Task Flow Once you have clicked the Create SQL Replay Task button, a wizard walks you through the steps of performing a SQL Performance Analyzer task. The steps are sequential, and you cannot alter the settings after a step completes. The steps are as follows:
1. Select SQL Tuning Set - Select the desired imported STS.
2. Establish Initial Environment - Depending on the change you are testing, you need to establish the initial environment on the current system. This could involve changing initialization parameters, for example.
3. Collect SQL Performance Before Change - Run the captured workload under the initial environment while capturing the performance statistics.
4. Make Change - Confirm that you have made the changes that you wish to test with the supplied SQL workload.
5. Collect SQL Performance After Change - Rerun the STS against the post-change environment, again collecting the performance statistics.
6. Compare SQL Performance Before and After Change - At this stage, the actual comparison statistics are generated.
SQL Performance Analyzer Task Flow You select the desired metrics you wish to base your comparison on and the schedule to submit the job (IMMEDIATELY, LATER). Once this step has completed successfully, you click the View Analyze Result button to view the results of the comparison.
Viewing Analysis Results The View Analyze Result button displays the above charts. Here you can see the before and after graphical representation of the SQL workload for the selected comparison metric. You can drill down on the improved, regressed and overall impact SQL statements for further illustration. From many of these screens you can choose to run the SQL Tuning Advisor.
Viewing Analysis Results When you click the SQL ID associated with a statement calculated as improved, you are presented with a more detailed illustration of the SQL.
Viewing Analysis Results The detailed page gives you execution statistics for the selected SQL statement. Using the scrollable windows at the bottom of the screen, you can view the plan table of the SQL statement before and after the proposed change. You can also view the actual SQL text.
Viewing Analysis Results On the regressed SQL page, you have additional information messages on the problem, symptom, and informational findings. Throughout these screens you can choose to run the SQL Tuning Advisor to make tuning recommendations.
Viewing Tuning Results When you click the Schedule SQL Tuning Advisor button, you complete the job name and optional job details and submit the job as required (IMMEDIATE, LATER). On successful job completion you go to the Advisor Central page and drill down to view the recommendations of the run.
Viewing Tuning Results You can view the details of the suggested statement improvements by selecting a specific type. The improvement identified by the SQL Tuning Advisor as having the highest benefit (percentage) is presented at the top of the list. It is recommended that you implement only one change at a time and repeat the analysis process, capturing "after-tuning" performance data and reanalyzing the recommendations against the "after-change" performance data.
SQL Performance Analyzer: PL/SQL Packages
• PL/SQL package: – DBMS_SQLTUNE
• Main APIs:
– CREATE_TUNING_TASK: Creates an advisor task
– EXECUTE_TUNING_TASK: Executes a previously created tuning task
– REPORT_TUNING_TASK: Displays the results of a tuning task
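A minimal sketch of these three APIs applied to one regressed statement follows. This is hedged: the sql_id and the task name are hypothetical.

```sql
DECLARE
  tk VARCHAR2(100);
BEGIN
  -- Create and run a tuning task for a single regressed statement
  tk := DBMS_SQLTUNE.CREATE_TUNING_TASK(
          sql_id    => 'abc123xyz0001',
          task_name => 'MY_TUNING_TASK');
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => 'MY_TUNING_TASK');
END;
/

-- Display the tuning recommendations
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('MY_TUNING_TASK') FROM dual;
```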
SQL Performance Analyzer: Data Dictionary Views
• Modified views in Oracle Database 11g:
– DBA{USER}_ADVISOR_TASKS: Displays details about the advisor task
– DBA{USER}_ADVISOR_FINDINGS: Displays analysis findings
• New views in Oracle Database 11g:
– DBA{USER}_ADVISOR_EXECUTIONS: Lists metadata information for a task execution
– DBA{USER}_ADVISOR_SQLPLANS: Displays the list of SQL execution plans
– DBA{USER}_ADVISOR_SQLSTATS: Displays the list of SQL compilation and execution statistics
SQL Performance Analyzer: Data Dictionary Views DBA{USER}_ADVISOR_SQLPLANS: Displays the list of all SQL execution plans, or those owned by the current user. DBA{USER}_ADVISOR_SQLSTATS: Displays the list of SQL compilation and execution statistics, or those owned by the current user. DBA{USER}_ADVISOR_TASKS: Displays details about the advisor task created to perform an impact analysis of a system environment change. DBA{USER}_ADVISOR_EXECUTIONS: Lists metadata information for a task execution. SQL Performance Analyzer creates a minimum of three executions to perform a change impact analysis on a SQL workload: a first execution that collects performance data for the before-change version of the workload, a second execution for the after-change version of the workload, and a final execution that performs the actual analysis. DBA{USER}_ADVISOR_FINDINGS: Displays analysis findings. The advisor generates four types of findings: performance regressions, symptoms, errors, and informative messages.
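These views can be queried, for example, to track the executions and findings of a SQL Performance Analyzer task. A hedged sketch; MY_SPA_TASK is a hypothetical task name:

```sql
-- List the executions performed for an analysis task
SELECT execution_name, execution_type, status
FROM   dba_advisor_executions
WHERE  task_name = 'MY_SPA_TASK';

-- List the findings (regressions, symptoms, errors, informative messages)
SELECT finding_id, type, message
FROM   dba_advisor_findings
WHERE  task_name = 'MY_SPA_TASK';
```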
Summary
In this lesson, you should have learned how to:
• Understand what SQL Performance Analyzer is
• Use SQL Performance Analyzer
Practice 3: Overview
This practice covers the following topics: • Capture SQL Tuning Sets • Migrate STS from Oracle Database 10g to Oracle Database 11g • Use SQL Performance Analyzer in an upgrade scenario • Use SQL Performance Analyzer in a change scenario
SQL Plan Management Overview
• SQL Plan Management provides automatically controlled SQL plan evolution • The optimizer automatically manages SQL plan baselines – Only known and verified plans are used
• Plan changes are automatically verified – Only comparable or better plans are used going forward
• Can pre-seed critical SQL with STS from SQL Performance Analyzer
SQL Plan Management Overview A potential performance risk occurs when the SQL execution plan changes for a SQL statement. A SQL plan change can occur for a variety of reasons, such as a new optimizer version, changed optimizer statistics, optimizer parameters, schema definitions, or system settings, as well as SQL profile creation. Various plan control techniques, such as stored outlines and SQL profiles, were introduced in past Oracle versions to address performance regressions due to plan changes. However, these techniques are reactive processes that require manual intervention. SQL Plan Management is a new feature introduced with Oracle Database 11g that allows the system to automatically control SQL plan evolution by maintaining what are called SQL plan baselines. With this feature enabled, a newly generated SQL plan can be integrated into a SQL plan baseline only if it has been proven that doing so will not cause a performance regression. So, during execution of a SQL statement, only a plan that is part of the corresponding SQL plan baseline can be used. As described later in this lesson, SQL plan baselines can be loaded automatically or can be seeded using SQL Tuning Sets. Various possible scenarios are studied later in this lesson. The main benefit of the SQL Plan Management feature is the performance stability of the system through the avoidance of plan regressions. Additionally, it saves the DBA the time otherwise spent identifying and analyzing SQL performance regressions and finding workable solutions.
SQL Plan Manageability Overview
You can play the following mini lesson to better understand SQL Plan Manageability: SQL Plan Manageability Overview (See URL in notes)
SQL Plan Manageability Overview To better understand the following slides, you can spend some time playing the following mini lesson at: http://stcontent.oracle.com/content/dav/oracle/Libraries/ST%20Curriculum/ST%20CurriculumPublic/Courses/Oracle%20Database%2011g/Oracle%20Database%2011g%20Release%201/11gR1_Mini_Lessons/11gR1_Beta1_OPM_JFV/11gR1_Beta1_OPM4_viewlet_swf.html
SQL Plan Baseline Architecture The SQL Plan Management (SPM) feature introduces the necessary infrastructure and services in support of plan maintenance and performance verification of new plans. For this, the optimizer maintains a plan history for individual SQL statements that are executed more than once. The optimizer recognizes a repeatable SQL statement by maintaining a statement log. A SQL statement is recognized as repeatable when it is parsed or executed again after it has been logged. Once a SQL statement is recognized as repeatable, the various plans generated by the optimizer are maintained as a plan history, which contains the relevant information used by the optimizer to reproduce an execution plan, such as the SQL text, outline, bind variables, and compilation environment. As an alternative or a complement to the automatic recognition of repeatable SQL statements and the creation of their plan history, manual seeding of plans for a set of SQL statements is also supported. A plan history contains the different plans generated by the optimizer for a SQL statement over time. However, only some of the plans in the plan history may be accepted for use. For example, a brand-new plan generated by the optimizer is not normally used until it has been verified not to cause a performance regression. Out of the box, plan verification is done as part of Automatic SQL Tuning, which runs as an automated task in a maintenance window.
SQL Plan Baseline Architecture (Continued) The Automatic SQL Tuning task targets only high-load SQL statements; for them, it automatically implements actions such as making a successfully verified plan an accepted plan. A set of acceptable plans constitutes a SQL plan baseline. The very first plan generated for a SQL statement is obviously acceptable for use and therefore forms the original plan baseline. Any new plans subsequently found by the optimizer are part of the plan history but not, initially, part of the plan baseline. The statement log, the plan history, and the plan baselines are stored in the SQL Management Base (SMB), which also contains SQL profiles. The SMB is part of the database dictionary and is stored in the SYSAUX tablespace. The SMB has automatic space management, such as periodic purging of unused plans. You can configure the SMB to change the plan retention policy and set a space size limit. Note: In Oracle Database 11g, if the database instance is up but the SYSAUX tablespace is offline, the optimizer cannot access SQL management objects, which can affect the performance of some of the SQL workload.
Loading SQL Plan Baseline optimizer_capture_sql_plan_baselines=true
Loading Plan Baseline You basically have two possibilities to load SQL plan baselines: On-the-fly capture • The first is automatic plan capture, enabled by setting the initialization parameter OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES to TRUE. This parameter is set to FALSE by default. Setting it to TRUE turns on the automatic recognition of repeatable SQL statements and the automatic creation of a plan history for such statements. This is illustrated on the left part of the graphic, where you can see the first generated SQL plan automatically integrated into the original SQL plan baseline. Bulk loading • The second loading mechanism uses the DBMS_SPM package, which allows you to manually manage SQL plan baselines. With this package, you can load SQL plans into a SQL plan baseline directly from the cursor cache or from an existing SQL Tuning Set (STS). For a SQL statement to be loaded into a SQL plan baseline from an STS, its SQL plan must be stored in the STS. DBMS_SPM also allows you to change the status of a baseline plan from accepted to not accepted (and vice versa), and to export baseline plans to a staging table, which can then be used to load SQL plan baselines on other databases.
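Both loading mechanisms can be sketched as follows. This is hedged: MY_STS and the sql_id are hypothetical, and the DBMS_SPM loading functions return the number of plans loaded.

```sql
-- On-the-fly capture: let the optimizer record repeatable statements
ALTER SYSTEM SET optimizer_capture_sql_plan_baselines = TRUE;

-- Bulk loading from an existing SQL Tuning Set
DECLARE
  n PLS_INTEGER;
BEGIN
  n := DBMS_SPM.LOAD_PLANS_FROM_SQLSET(sqlset_name => 'MY_STS');
END;
/

-- Bulk loading of the cursor-cache plans for one statement
DECLARE
  n PLS_INTEGER;
BEGIN
  n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => 'abc123xyz0001');
END;
/
```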
Important Baseline SQL Plan Attributes
Important Baseline SQL Plan Attributes When a plan enters the plan history, it is associated with a number of important attributes: • SIGNATURE, SQL_HANDLE, SQL_TEXT, and PLAN_NAME are important identifiers for search operations. • ORIGIN allows you to determine whether the plan was automatically captured (AUTOCAPTURE) or manually inserted into the plan history (MANUAL). • ENABLED and ACCEPTED: ENABLED means that the plan is enabled for use by the optimizer; if ENABLED is not set, the plan is not considered. ACCEPTED means that the plan was validated as a good plan, either automatically by the system or by the user manually changing it to ACCEPTED. Once a plan is ACCEPTED, it only becomes not ACCEPTED if someone uses DBMS_SPM.ALTER_SQL_PLAN_BASELINE() to change its status. An ACCEPTED plan can be temporarily disabled by removing the ENABLED setting. A plan has to be both ENABLED and ACCEPTED for the optimizer to consider using it. • FIXED means that the optimizer considers only those plans and no others. For example, if you have ten baseline plans and three of them are marked FIXED, the optimizer uses only the best plan from those three, ignoring all the others.
Important Baseline SQL Plan Attributes (Continued) You can look at each plan's attributes using the DBA_SQL_PLAN_BASELINES view, as shown in the slide. You can then change some of them using the DBMS_SPM.ALTER_SQL_PLAN_BASELINE function. You also have the possibility of removing plans, or a complete plan history, using the DBMS_SPM.PURGE_SQL_PLAN_BASELINE function. The example shown in the slide changes the ACCEPTED attribute of plan SYS_SQL_PLAN_8DFC352F359901EA to YES, making it ACCEPTED and thus part of the baseline. Note: The DBA_SQL_PLAN_BASELINES view contains additional attributes that allow you to determine when each plan was last used and whether a plan should be automatically purged.
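A hedged sketch of inspecting and modifying plan attributes follows. The plan name is taken from the text's example; the set of attributes accepted by ALTER_SQL_PLAN_BASELINE may vary by release, so ENABLED is used here for illustration.

```sql
-- Inspect the attributes of the stored plans
SELECT sql_handle, plan_name, origin, enabled, accepted, fixed
FROM   dba_sql_plan_baselines;

-- Temporarily disable one plan by clearing its ENABLED attribute
DECLARE
  n PLS_INTEGER;
BEGIN
  n := DBMS_SPM.ALTER_SQL_PLAN_BASELINE(
         plan_name       => 'SYS_SQL_PLAN_8DFC352F359901EA',
         attribute_name  => 'ENABLED',
         attribute_value => 'NO');
END;
/
```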
SQL Plan Selection dbms_xplan.display_sql_plan_baseline
SQL Plan Selection If you are using automatic plan capture, the first time a SQL statement is recognized as repeatable, its best-cost plan is added to the corresponding SQL plan baseline, and that plan is used to execute the statement. The optimizer uses a comparative plan selection policy when a plan baseline exists for a SQL statement and the initialization parameter OPTIMIZER_USE_SQL_PLAN_BASELINES is set to TRUE (the default value). Each time a SQL statement is compiled, the optimizer first uses the traditional cost-based search method to build a best-cost plan. Then it tries to find a matching plan in the SQL plan baseline. If a match is found, it proceeds as usual. Otherwise, it first adds the new plan to the plan history, then costs each of the accepted plans in the SQL plan baseline and picks the one with the lowest cost. The accepted plans are reproduced using the outline stored with each of them. So, the effect of having a SQL plan baseline for a SQL statement is that the optimizer always selects one of the accepted plans in that SQL plan baseline. With the SQL Plan Management feature, the optimizer produces a plan that can be either a best-cost plan or a baseline plan. This information is dumped into the OTHER_XML column of PLAN_TABLE upon explain plan. In addition, using the new DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE function, you can display one or more execution plans for a specified sql_handle of a plan baseline. If plan_name is also specified, then the corresponding execution plan is displayed.
SQL Plan Selection (Continued) Note: To preserve backward compatibility, if a stored outline for a SQL statement is active for the user session, the statement is compiled using the stored outline. In addition, a plan generated by the optimizer using a stored outline is not stored in the SMB even if automatic plan capture has been enabled for the session.
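Displaying the baseline plans for one statement can be sketched as follows. This is hedged: the sql_handle is hypothetical, and the function name shown is the one shipped as DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE.

```sql
-- Show every plan stored in the baseline for one SQL handle
SELECT t.*
FROM   TABLE(DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE(
               sql_handle => 'SYS_SQL_209d10fabbedc741',
               format     => 'basic')) t;
```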
Possible SQL Plan Manageability Scenarios
• Database upgrade
• New application deployment
[Diagram: plans being loaded into the Oracle Database 11g plan history and SQL plan baseline]
Possible SQL Plan Manageability Scenarios Database upgrade: • Bulk SQL plan loading is especially useful when the system is being upgraded from a pre-Oracle Database 11g version to Oracle Database 11g. For this, you can capture plans for a SQL workload into a SQL Tuning Set (STS) before the upgrade, and then load these plans from the STS into the SQL plan baseline immediately after the upgrade. This strategy can minimize plan regressions resulting from the use of the new optimizer version. New application deployment: • The deployment of a new application module means the introduction of brand-new SQL statements into the system. The software vendor can ship the application software along with appropriate SQL plan baselines for the new SQL being introduced. Because of the plan baselines, the new SQL statements initially run with plans that are known to give good performance under a standard test configuration. However, if the customer's system configuration is very different from the test configuration, the plan baselines can be evolved over time to produce better performance. For both cases, after manual loading, you can use automatic SQL plan capture to make sure that, going forward, only better plans will be used for your applications.
SQL Performance Analyzer and SQL Plan Baseline Scenario
[Diagram: before the change, the Oracle Database 11g instance runs with optimizer_features_enable=10 while plans are recorded in the plan history and SQL plan baseline]
SQL Performance Analyzer and SQL Plan Baseline Scenario A variation of the first method described in the previous slide uses SQL Performance Analyzer. You can capture pre-Oracle Database 11g plans in a SQL tuning set (STS) and import them into Oracle Database 11g. Then set the initialization parameter OPTIMIZER_FEATURES_ENABLE to the 10g value, which makes the optimizer behave as if this were an Oracle Database 10g database. Next, run SQL Performance Analyzer for the STS. Once that is complete, set OPTIMIZER_FEATURES_ENABLE back to the 11g value and rerun SQL Performance Analyzer for the STS. SQL Performance Analyzer produces a report listing the SQL statements whose plans have regressed from 10g to 11g. For those SQL statements that SQL Performance Analyzer shows to incur a performance regression due to the new optimizer version, you can capture their plans using an STS and then load them into the SMB. This method represents the best form of plan seeding because it helps prevent performance regressions while preserving performance improvements upon database upgrade.
SQL Performance Analyzer Overview
You can play the following mini lesson to better understand SQL Performance Analyzer: SQL Performance Analyzer Overview (See URL in notes)
You can also refer to the SQL Performance Analyzer eStudy for more information
SQL Performance Analyzer Overview To better understand the SQL Performance Analyzer feature, you can spend some time playing the following mini lesson at: http://stcontent.oracle.com/content/dav/oracle/Libraries/ST%20Curriculum/ST%20CurriculumPublic/Courses/Oracle%20Database%2011g/Oracle%20Database%2011g%20Release%201/11gR1_Mini_Lessons/11gR1_Beta1_SQL_Replay_JFV/11gR1_Beta1_SQL_Replay_viewlet_swf.html This mini lesson is best viewed on a 19” screen with all browser tool bars removed. In addition, you can refer to the SQL Performance Analyzer lesson for more information.
Auto Load SQL Plan Baseline Scenario
Auto Load SQL Plan Baseline Scenario Another possibility for the upgrade scenario is to use the automatic SQL plan capture mechanism. In this case, you set the initialization parameter OPTIMIZER_FEATURES_ENABLE (OFE) to the pre-Oracle Database 11g version value for an initial period of time (a quarter, for example) and execute your workload after the upgrade with automatic SQL plan capture enabled. During this initial period, because of the OFE parameter setting, the optimizer is able to reproduce pre-Oracle Database 11g plans for a majority of the SQL statements. Because automatic SQL plan capture is also enabled during this period, the pre-Oracle Database 11g plans produced by the optimizer are captured as SQL plan baselines. After the initial period ends, you can remove the OFE setting to take advantage of the new optimizer version while incurring minimal or no plan regressions, thanks to the plan baselines.
Purging Policy The space occupied by the SQL Management Base (SMB) is regularly checked against a defined limit, expressed as a percentage of the SYSAUX tablespace size. By default, the space budget limit for the SMB is 10 percent of the SYSAUX size; however, you can configure the SMB and change the space budget to a value between 1 percent and 50 percent. A daily task measures the total space occupied by the SMB and, when it exceeds the defined percentage limit, generates a warning that is written to the alert log. The alerts are generated daily until either the SMB space limit is increased, the size of SYSAUX is increased, or the size of the SMB is decreased by purging some of the SQL management objects, such as SQL plan baselines or SQL profiles. Space management of SQL plan baselines is done proactively using a regularly scheduled purging task, which runs as an automated task in the maintenance window. Any plan that has not been used for more than 53 weeks is purged. However, you can configure the SMB and change the unused-plan retention period to a value between 5 and 523 weeks (a little more than 10 years). In addition, you can manually purge the SMB using the DBMS_SPM.PURGE_SQL_PLAN_BASELINE function as shown in the example.
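A minimal sketch of the configuration and purge calls described above; the chosen values and the plan name are illustrative assumptions:

```sql
-- Raise the SMB space budget from the default 10% of SYSAUX to 30%,
-- and shorten the unused-plan retention from the default 53 weeks to 10.
BEGIN
  DBMS_SPM.CONFIGURE('space_budget_percent', 30);
  DBMS_SPM.CONFIGURE('plan_retention_weeks', 10);
END;
/

-- Manually purge a single baseline plan (the plan name is hypothetical):
DECLARE
  l_purged PLS_INTEGER;
BEGIN
  l_purged := DBMS_SPM.PURGE_SQL_PLAN_BASELINE('SQL_PLAN_abc123def456');
END;
/
```

The current SMB settings can be reviewed in the DBA_SQL_MANAGEMENT_CONFIG view.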
Why Use Database Replay?
• System changes such as hardware and software upgrades are a fact of life
• Customers want to identify the full impact of a change before going live
• Extensive testing and validation can be expensive in both time and money
• Despite expensive testing, the success rate is low
– Many issues go undetected
– Changes can negatively impact system availability and performance
• Cause of the low success rate
– Inability to test properly with real-world production workloads, so many issues go undetected
Database Replay
• Re-create the actual production database workload in a test environment
• Identify, analyze, and fix potential instabilities before making changes to production
• Capture the workload in production
– Capture the full production workload with real load and concurrency
– Move the captured workload to the test system
• Replay the workload in test
– Make the desired changes in the test system
– Replay the workload with production load and concurrency
– Honor commit ordering
Database Replay Oracle Database 11g addresses the aforementioned challenges with Database Replay, which allows you to test the impact of a system change by replaying a real-world workload on a test system before the change is exposed to the production system. The production workload of the database server (including transaction concurrency and dependency) is recorded over a representative period of time (for example, a peak period). The recorded data is then used to replay the workload on a test system that has been appropriately configured. You gain a high degree of confidence in the overall success of the database change by subjecting the database server in the test system to a workload that is practically indistinguishable from the production workload.
System Architecture: Capture Here you see an illustration of a system that is being recorded. You should always record a workload that spans an “interesting” period in the production system. Typically, the replay of the recording is used to determine whether it is safe to upgrade to a new version of the RDBMS server. During recording, special recording infrastructure built into the RDBMS records data about all external client requests while the production workload is running on the system. External requests are any SQL queries, PL/SQL blocks, PL/SQL remote procedure calls, DML statements, DDL statements, object navigation requests, and OCI calls. Background jobs and, in general, all internal clients continue their work during recording without being recorded. The end product, the workload recording, contains all the information necessary for replaying the workload as seen by the RDBMS in the form of external requests. The recording infrastructure imposes minimal performance overhead (extra CPU, memory, and I/O) on the recording system. You should, however, plan to accommodate the additional disk space needed for the actual workload recording. RAC Note: Instances in a RAC environment have access to the common database files but do not need to share a common general-purpose file system. In such an environment, the workload recording is written to each instance’s file system during recording. For processing and replay, all the parts of the workload recording need to be manually copied into a single directory.
System Architecture: Replay The workload recording is consumed by a special application called the replay driver, which sends requests to the RDBMS on which the workload is replayed. The RDBMS on which the workload is replayed is usually a test system, and it is assumed that its database is suitable for the replay of the recorded workload. The internal RDBMS clients are not replayed. The replay driver is a special client that consumes the workload recording and sends the appropriate requests to the test system to make it behave as if the external requests were sent by the clients used during the recording of the workload (see the previous example). Using a special driver that acts as the sole external client to the RDBMS allows the record-and-replay infrastructure to be client-agnostic. The replay driver consists of one or more clients that connect to the replay system and send requests based on the workload capture. The replay driver distributes the workload capture streams equally among all the replay clients based on network bandwidth, CPU, and memory capability.
The Big Picture The significant benefit of the Oracle Database 11g approach to managing system changes is the added confidence it gives the business in the success of performing the change. The record-and-replay functionality offers confidence in the ease of a database server upgrade. A useful application of Database Replay is testing the performance of a new server configuration. Consider a customer running a single-instance database who wants to move to a Real Application Clusters (RAC) setup. The customer records the workload of an interesting period and then sets up a RAC test system for replay. During replay, the customer can monitor the performance benefit of the new configuration by comparing its performance with that of the recorded system. Demonstrating the benefits with the Database Replay functionality can also help convince customers to move to a RAC configuration. Another application is debugging: you can record and replay sessions, emulating an environment to make bugs more reproducible. Manageability feature testing is another benefit: self-managing and self-healing systems need to implement tuning advice automatically (the “autonomic computing model”), and multiple replay iterations allow testing and fine-tuning of the control strategies’ effectiveness and stability. Many Oracle customers have expressed vigorous interest in this change-assurance functionality. The database administrator, or a user with special privileges granted by the DBA, initiates the record-and-replay cycle and has full control of the entire procedure.
Pre-Change Production System
Changes not supported: clients/app servers
Supported changes:
• Database upgrades, patches
• Schema, parameters
• RAC nodes, interconnect
• OS platforms, OS upgrades
• CPU, memory
• Storage
• Etc.
Pre-Change Production System Database Replay focuses on recording and replaying the workload that is directed to the RDBMS; therefore, recording of the workload is done at the point indicated in the diagram above. Recording at the RDBMS level within the software stack makes it possible to exchange anything below this level and test the new setup using the record-and-replay functionality. While replaying the workload, the RDBMS performs the actions observed during recording. In other words, during the replay phase the RDBMS code is exercised in a way very similar to how it was exercised during the recording phase. This is achieved by re-creating all external client requests to the RDBMS. External client requests include all the requests by all possible external clients of the RDBMS.
Workloads Supported
• Supported
– All SQL (DML, DDL, PL/SQL) with practically all types of binds
– Full LOB functionality (cursor-based and direct OCI)
– Local transactions
– Logins/Logoffs
– Session switching
– Limited PL/SQL RPCs
• Limitations
– Direct path load, import/export
– OCI-based object navigation (ADTs) and REF binds
– Streams, non-PL/SQL-based AQ
– Distributed transactions, remote describe/commit operations
– Flashback
– Shared Server
Capture Considerations
Planning
• Adequate disk space for the captured workload (binary files)
• Database restart
– The only way to guarantee an authentic replay
— Startup restrict
— Capture will lift the restriction
– May not be necessary, depending on the workload
• Means to restore the database for replay purposes
– Physical restore (SCN/time provided)
– Logical restore of application data
– Flashback/snapshot standby
• Filters can be specified to capture a subset of the workload
• SYSDBA or SYSOPER privileges and appropriate OS privileges
Overhead
• Performance overhead for TPC-C is 4.5%
• Memory overhead: 64 KB per session
• Disk space
Replay Considerations
• Preprocess the captured workload
– One-time action
– On the same DB version as the replay
– Can be performed anywhere (production, test, or another system) as long as versions match
• Restore the database, then perform the change
– Upgrade
– Schema changes
– OS change
– Hardware change
– Add instance
Replay Analysis There may be some divergence of the replay relative to what was recorded. For example, when replaying on a newer version of the RDBMS, a new algorithm may cause specific requests to run faster, and the divergence then appears as faster execution. This is considered desirable divergence. Another example of divergence is when a SQL statement returns fewer rows during replay than it returned during capture. This is clearly undesirable, and its root cause may be a new index lookup algorithm. The replay identifies this fact. For data divergence, the result of an action can be considered as: • The result set of a SQL query • An update to persistent database state • A return code or an error code Performance divergence is useful for determining how new algorithms introduced in the replay system may affect overall performance. There are numerous factors that can cause replay divergence. While some of them cannot be controlled, others can be mitigated. It is the task of the DBA to understand the workload’s runtime operations and take the necessary actions to reduce the level of record-and-replay divergence. The two types of divergence reporting are listed above. Online divergence reporting should aid the decision to stop a replay that has diverged significantly; the results of the replay before the divergence may still be useful, but further replay would not produce reliable conclusions. Offline divergence reporting is used to determine how successful the replay was after the replay has finished.
Replay Technology
• Functionally equivalent replay
– Independent of the clients/protocols in the original setup
• Server-controlled replay
– Scalable architecture: use an arbitrary number of replay clients
– Multiple multithreaded replay clients drive the workload
• Commit-based synchronization
• Automatic remapping of physical locators
– ROWIDs
– LOB locators
– Cursor numbers
• Preservation of IDs during replay
– Sequences
– GUIDs
Replay Data Divergence
Workload characteristics that increase data/error divergence:
• Implicit session dependencies in the application (for example, use of DBMS_PIPE)
• Extensive use of multiple commits within PL/SQL
• User locks
• Use of non-repeatable functions and system-dependent data
• External interactions via URLs or database links
Using Enterprise Manager: Capture Workflow Here you see the workflow for Database Replay. As shown, the workload capture and preprocessing need to be done only once; the data produced can be used for workload replay multiple times. The workload recording has three main steps:
1. Planning for capture
2. Preparing for capture
3. Capturing the workload
These steps are discussed in more detail in the following slides.
Using Enterprise Manager for Workload Capture Enterprise Manager (EM) provides you with a user interface to manage each component in the Database Replay process. The workflow and user interface applies to both EM Database Control and EM Grid Control. You access Database Replay from the Software and Support tab of Database Control. You are then directed to the screen to create the necessary tasks to perform the following: • Manage the workload capture operations. • View any previously captured workload. • Manage the workload replay operations. • Stop the active capture or replay. - This option is only available during an active capture or replay session.
Using Enterprise Manager for Workload Capture The EM wizard walks you through the pre-checks before beginning the database workload capture. You are first asked to confirm that you have a valid backup strategy and that there is sufficient disk space to hold the generated workload and metadata. You are then asked to set up any required capture filters to customize what data is captured (or filtered out of the captured data). Because EM is expected to be used to monitor and administer the recording and replaying sessions (essentially duplicating its own workload during replay), EM provides a default filter to filter itself out. You can add additional filtering components. You should select the capture period based on the application and its peak periods. You can use existing manageability features such as the Automatic Workload Repository (AWR) and Active Session History (ASH) to select an appropriate period based on workload history. At this stage, you can optionally choose to restart the database before the capture process begins. If you know your workload well, you can choose not to restart the database. Not restarting the database allows in-flight transactions to be present during the capture phase, which increases the potential for data divergence in the replay phase. Oracle recommends that you restart the database to minimize data divergence in the replay phase.
Using Enterprise Manager for Workload Capture You must specify the location for the workload capture data. You can specify an existing database directory or choose to create a directory from this screen; you are then prompted for a directory object name and an OS path, which are validated. You should ensure that ample disk space exists to hold the captured workload, because recording stops if disk space is insufficient. However, everything captured up to that point is usable for replay. RAC Note: For RAC, the DBA should define the directory for captured data at a storage location accessible by all instances. Otherwise, the workload capture data needs to be copied from all locations to a single location before the processing of the workload recording is started.
Using Enterprise Manager for Workload Capture You complete the schedule information to submit a capture job (IMMEDIATE or LATER) and then review the job information before submitting the job. Monitoring the capture shows you its progress and resource usage. Because workload capture is typically done on a production system with a heavy workload, monitoring during the capture phase is lightweight and adds only minimal overhead to the production workload. The monitoring data is accessible through V$ views.
Using Enterprise Manager: Replay Workflow
(Diagram: raw captured data from the production system is processed into replay files and metadata for the workload replay process.)
Using Enterprise Manager: Replay Workflow The workload replay has four steps:
1. Initializing replay data
2. Preparing for replay
3. Replay
4. Replay analysis
These steps are discussed in more detail in the following slides.
Using Enterprise Manager for Workload Replay You begin the replay workflow by specifying the directory object where you stored the captured data. Once it is specified, the above screen is displayed with the Capture Summary of the selected workload. Select Preprocess Workload to commence the prepare phase. After the database has restarted in restricted mode, you begin the capture phase by calling the DBMS_WORKLOAD_CAPTURE package with the following arguments: • A name for the capture. This allows reference to historical captured data on the capture system. • A directory object pointing to an existing directory in which to store the captured workload data. • The time duration T for the capture. Recording stops approximately after time T. • The filtering mode. • The restart mode. When you execute the FINISH_CAPTURE procedure, the capture stops, the database flushes the capture buffers, and all open workload data files are closed. After finishing the recording, you can request a report on the capture; this report is used for comparison with the report generated through the replay phases. RAC Note: When an instance goes down during capture on a RAC system, the capture continues normally and is not aborted. The sessions that died as a result of the instance going down are replayed up to the point at which the instance died. When the dead instance is repaired and comes back up during capture, all new sessions are recorded normally. During replay, the death of instances is not replayed.
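The capture sequence described above can be sketched as follows; the directory object, OS path, and capture name are illustrative assumptions, not values from the course:

```sql
-- Illustrative capture sequence; names and paths are assumptions.
CREATE DIRECTORY capture_dir AS '/u01/app/oracle/capture';

BEGIN
  DBMS_WORKLOAD_CAPTURE.START_CAPTURE(
    name     => 'peak_capture',
    dir      => 'CAPTURE_DIR',
    duration => NULL);   -- NULL: capture until FINISH_CAPTURE is called
END;
/
-- ... the production workload runs and is recorded ...
BEGIN
  DBMS_WORKLOAD_CAPTURE.FINISH_CAPTURE;
END;
/
```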
Using Enterprise Manager for Workload Replay At this phase, the recorded data is transformed into a more suitable format. This is done offline, and preferably on a system other than the production system, because it is resource intensive. The processed capture output can be used for multiple replays as long as it is used on the same RDBMS version as the replay. If the captured data has already been processed for a given RDBMS version, say A, you must perform the process-capture phase again if you need to replay on an RDBMS version newer than A. The following actions are performed at this phase: • Transform the workload capture data files into suitable replay streams, the replay files. • Produce all the necessary metadata. This phase is equivalent to the functionality of the DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE procedure. RAC Note: In a RAC setup, one database instance of the replay system is selected to process the workload recording. If the recorded data was written to local file systems on the RAC nodes, the recorded data files from all the nodes should first be copied to the directory of the instance on which the preprocessing is to be done. If the captured data is stored on a shared file system, copying is not necessary.
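The preprocessing step reduces to a single call; 'CAPTURE_DIR' is an assumed directory object pointing at the (copied) capture files on the replay system:

```sql
-- One-time preprocessing of the capture, performed on the same RDBMS
-- version that will be used for the replay.
BEGIN
  DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE(capture_dir => 'CAPTURE_DIR');
END;
/
```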
Using Enterprise Manager for Workload Replay You select the required captured data from the replay history table (if one exists) and click Set Up Replay to begin the replay process.
Using Enterprise Manager for Workload Replay The Set Up Replay phase can be performed multiple times on the processed capture data. The page shown above verifies that you have completed the necessary steps: • Restore the database: You need to restore the database objects used during capture to a state equivalent to the StartSCN, the system SCN at which the recording actually started. • Perform system changes: Because the intent is to test your workload in a different environment, you make the necessary environment changes here. • Resolve references to external clients: A captured workload may contain references to external interactions that may be meaningful only in the capture environment. You should fix all such references prior to replay; replaying a workload with unresolved references to external interactions may affect your production environment. • Set up replay clients: The workload is replayed using replay clients connected to the replay database. You should install these replay clients, preferably on systems other than the database host. In addition, each replay client must be able to access the replay directory.
Using Enterprise Manager for Workload Replay References to resolve:
Using Enterprise Manager for Workload Replay A captured workload may contain references to external interactions (connection strings, database links, directory objects) that may be meaningful only in the capture environment. Replaying a workload with unresolved references to external interactions may cause unexpected problems in the production environment. A replay should be performed in a completely isolated test environment (for example, hosts, networks, e-mail servers, and storage systems). You should ensure that all references to external interactions have been resolved in the replay environment so that replaying the workload will cause no harm to your production environment. RAC Note: In a RAC system, the replay data files should be stored in shared storage or copied to the appropriate local directories so that all the database instances in the RAC and all the replay clients can access them. The remapping of external interactions should include the remapping of instances. In particular, every captured connection string probably needs to be remapped to a connection string in the replay system. If the capture system is a single-instance database and the replay system is also a single-instance database, the remapping of the connection string is straightforward and involves adding the appropriate entry to the configuration file. The same is valid when both the capture and the replay systems are RAC databases with the same number of nodes; that is, there is a one-to-one mapping of the connection strings of the capture system to the connection strings of the replay system. Remapping becomes more complicated if the capture and the replay systems have a different number of nodes.
Using Enterprise Manager for Workload Replay You use either the default options or options from a previous replay. The next step allows you to further customize the chosen configuration.
Using Enterprise Manager for Workload Replay SYNCHRONIZATION: Turns synchronization on (TRUE, the default) or off (FALSE) during workload replay. This allows you to turn off SCN-based synchronization of the replay, which is desirable if the workload consists of transactions that do not depend heavily on one another, so that any divergence during replay is acceptable. Such a replay mode will probably yield significant data divergence; therefore, the data-divergence metrics cannot be used to indicate whether it makes sense to look at the performance divergence. CONNECT_TIME_SCALE: Scales the time elapsed between the start of the workload capture and each session connect by the given value, interpreted as a percentage. It can be used to increase or decrease the number of concurrent users during the workload replay. THINK_TIME_SCALE: Scales the time elapsed between two successive user calls from the same session, interpreted as a percentage. Setting this value to 0 sends requests to the database as fast as possible. THINK_TIME_AUTO_CORRECT: Automatically corrects the think time between calls when a user call takes longer to complete during replay than it took during the original capture. The input is interpreted as a percentage value. Note that THINK_TIME_AUTO_CORRECT adjusts the think time that is calculated based on THINK_TIME_SCALE: if TRUE, it reduces (or increases) the think time when the replay goes slower (or faster) than the capture; if FALSE, it does nothing.
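These options map directly onto the PL/SQL replay interface; the following sketch assumes the Oracle Database 11g Release 1 signature, and the replay name, directory object, and scale values are illustrative:

```sql
-- Illustrative replay preparation: keep commit-based synchronization,
-- halve the think time, and let auto-correct keep the replay on schedule.
BEGIN
  DBMS_WORKLOAD_REPLAY.INITIALIZE_REPLAY(
    replay_name => 'replay_1',
    replay_dir  => 'CAPTURE_DIR');
  DBMS_WORKLOAD_REPLAY.PREPARE_REPLAY(
    synchronization         => TRUE,
    connect_time_scale      => 100,
    think_time_scale        => 50,
    think_time_auto_correct => TRUE);
END;
/
```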
Using Enterprise Manager for Workload Replay Workload is replayed using replay clients connected to the database. You should be ready to start the replay clients at this point. When you are ready to start the replay clients, click Next and then start the clients.
Using Enterprise Manager for Workload Replay The workload replay wizard waits for you to start the replay clients. Open a separate terminal window and start the replay clients. You can start multiple replay clients, depending on the workload replay size. Each client initiates one or more replay threads with the RDBMS, with each replay thread corresponding to a stream from the workload capture. The replay clients are started, using the syntax illustrated above, after the database server has entered replay PREPARE mode from the wizard. The userid and password parameters are the user ID and password of the replay user for the client. The server parameter is a connection string that connects to the instance of the replay system. The replaydir parameter points to the directory that contains the processed replay files. The workdir parameter defines the client’s working directory; if left unspecified, it defaults to the current directory. You should make sure that the following has been done before starting the replay clients: • The replay client software is installed on the hosts. • The client has access to the replay directory. • The replay directory contains the preprocessed replay files. • The user ID and password for the replay user are correct. Furthermore, the user must be able to use the workload replay package and must have the user SWITCH privilege.
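A typical invocation of the wrc replay client looks like the following sketch; the user, password, connect string, and paths are all illustrative assumptions:

```shell
# Illustrative wrc invocation; all values are assumptions.
$ wrc userid=replay_user password=secret server=testdb \
      replaydir=/u01/app/oracle/replay workdir=/tmp/wrc

# Calibrate mode estimates how many replay clients and hosts
# a given processed capture requires:
$ wrc mode=calibrate replaydir=/u01/app/oracle/replay
```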
Using Enterprise Manager for Workload Replay Once all the required replay clients have successfully connected to the database, you are asked to review the replay setup and submit the job. You then see the progress window illustrated above, which gives comparison statistics as the replay progresses. Replay monitoring shows you the replay progress and includes the following data: • Row count differences • Failed statements • Replay speed • Replay errors or warnings • Current time, time divergence (time deficit), and estimated time to finish You can terminate the replay at any stage with the Stop Replay button (not shown in the screenshot above). On successful completion of the replay, the terminal window in which you started the replay clients displays the message “Replay Finished” followed by a time stamp. The workload replay is now complete, and you can use existing manageability tools such as AWR and ASH for additional system performance information. RAC Note: If a specific captured instance is mapped to a new instance in the replay system, all the captured calls for that captured instance are sent to the new one. If the replay system is also RAC and a captured instance is mapped to the runtime load balancing of the replay system, all the captured calls for that recorded instance are dynamically distributed to instances in the replay RAC system using runtime load balancing.
Packages and Procedures
You require the EXECUTE privilege on the following packages:
• DBMS_WORKLOAD_CAPTURE
– START_CAPTURE
– FINISH_CAPTURE
– ADD_FILTER
– DELETE_FILTER
– REPORT
Packages and Procedures You require the EXECUTE privilege on the capture and replay packages to execute them. These privileges are usually assigned by the DBA. The DBMS_WORKLOAD_* procedures are detailed below: START_CAPTURE • NAME: Names the workload capture period for future reference. • DIR: Specifies the directory where the capture is stored. Must be a valid DIRECTORY object with enough disk space to contain the entire capture data. • DURATION: Specifies the duration in minutes that the capture is to continue. By default, the capture continues until FINISH_CAPTURE is called. • DEFAULT_ACTION: Specifies whether INCLUDE or EXCLUDE filters are created. • NO_RESTART_MODE: Specifies whether the database should be restarted before the capture begins. The default is FALSE. FINISH_CAPTURE • TIMEOUT: Specifies the time in seconds that the procedure should wait before timing out. Returns an error if the RDBMS is not currently capturing data. REPORT • DIR: Specifies the directory containing the workload capture on which the report is to be run. Must be a valid DIRECTORY object. • FORMAT: Values are TEXT, HTML, or XML.
Packages and Procedures ADD_FILTER • FILTER_NAME: Specifies a filter name. • ATTRIBUTE: Specifies the attribute on which the filter is applied. Values are: PROGRAM, MODULE, ACTION, SERVICE, SESSION_ID, USERNAME. • VALUE: Specifies a value for the attribute. DBMS_WORKLOAD_REPLAY • PROCESS_CAPTURE(TARGET_DIR) • INITIALIZE_REPLAY(REPLAY_DIR) • PREPARE_REPLAY(REPLAY_NAME, REPLAY_DIR, DEFAULT_ACTION, SYNCHRONIZATION, CONNECT_TIME_SCALE, THINK_TIME_SCALE, THINK_TIME_AUTO_CORRECT) • START_REPLAY() • CANCEL() • REMAP_CONNECTION(CONNECTION_ID, REPLAY_CONNECTION)
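A sketch of these calls in use; the filter values and the connect string are assumptions, and the fname/fattribute/fvalue parameter names are taken from the 11g package rather than from the slide:

```sql
-- Exclude Enterprise Manager's own activity from a capture:
BEGIN
  DBMS_WORKLOAD_CAPTURE.ADD_FILTER(
    fname      => 'filter_em',
    fattribute => 'PROGRAM',
    fvalue     => 'OMS');
END;
/

-- On the replay side, point a captured connection at the test system
-- and start the replay once the clients are connected:
BEGIN
  DBMS_WORKLOAD_REPLAY.REMAP_CONNECTION(
    connection_id     => 1,
    replay_connection => 'testhost:1521/testdb');
  DBMS_WORKLOAD_REPLAY.START_REPLAY;
END;
/
```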
Data Dictionary Views: Database Replay
• DBA_WORKLOAD_CAPTURES: Lists all the workload captures performed in the database • DBA_WORKLOAD_FILTERS: Lists all the workload filters defined in the database • DBA_WORKLOAD_REPLAYS: Lists all the workload replays that have been performed in the database • DBA_WORKLOAD_REPLAY_DIVERGENCE: Used to monitor workload divergence • DBA_WORKLOAD_CONNECTION_MAP: Used to review all connection strings used by workload replays
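These views can be queried directly; the column selections below are assumptions about the 11g dictionary shape, intended only as a starting point:

```sql
-- Review past captures and replays (column names assumed).
SELECT id, name, status
FROM   dba_workload_captures
ORDER  BY id;

SELECT id, name, status
FROM   dba_workload_replays
ORDER  BY id;
```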
Practice 5: Overview
This practice covers the following topics: • Use Database Replay using Enterprise Manager with the following scenarios: – Replay in synchronous mode without changes – Replay in synchronous mode after changes are applied – Replay in non-synchronous mode without changes
After completing this lesson, you should be able to: • Set up and modify Automatic SQL Tuning • Use the PL/SQL interface to perform fine tuning • View and interpret reports generated by Automatic SQL Tuning
Automatic SQL Tuning Overview To better understand the following slides, you can spend some time playing the following mini lesson at: http://stcontent.oracle.com/content/dav/oracle/Libraries/ST%20Curriculum/ST%20CurriculumPublic/Courses/Oracle%20Database%2011g/Oracle%20Database%2011g%20Release%201/11gR1_Mini_Lessons/11gR1_Beta1_Auto_SQL_Tuning_JFV/11gR1_Beta1_Auto_SQL_Tuning_viewlet_swf.html
SQL Tuning in Oracle Database 10g Oracle Database 10g introduced the SQL Tuning Advisor to help DBAs and application developers improve the performance of SQL statements. The advisor targets the problem of poorly written SQL, in which a SQL statement has not been designed in the most efficient fashion, as well as the (more common) problem of SQL performing poorly because the optimizer generated a poor execution plan due to a lack of accurate and relevant data statistics. In all cases, the advisor makes specific suggestions for speeding up SQL performance, but it leaves the responsibility of implementing its recommendations to the user. In addition to the SQL Tuning Advisor, Oracle Database 10g has an automated process to identify high-load SQL statements in your system. This is done by the Automatic Database Diagnostic Monitor (ADDM), which automatically identifies high-load SQL statements that are good candidates for tuning. However, a major issue still remains: although ADDM identifies some SQL that should be tuned, users must manually review the ADDM reports and run SQL Tuning Advisor on the identified statements.
Oracle Database 11g: New Features for Administrators 6 - 4
Automatic SQL Tuning in Oracle Database 11g
Automatic SQL Tuning in Oracle Database 11g
Oracle Database 11g further automates the SQL tuning process by identifying problematic SQL statements, running the SQL Tuning Advisor on them, and implementing the resulting SQL Profile recommendations without requiring any user intervention. Automatic SQL Tuning uses the AUTOTASK framework through a new task called Automatic SQL Tuning, which runs every night by default. Here is a brief description of the automated SQL tuning process:
1) Based on AWR top SQL identification (statements that were top in four different categories: the past week, any day in the past week, any hour in the past week, or average single-execution response time), Automatic SQL Tuning selects the target statements for automatic tuning.
2) and 3) While the Automatic SQL Tuning task executes during the maintenance window, the previously identified SQL statements are automatically tuned by invoking the SQL Tuning Advisor, and SQL Profiles are created for them if needed. Before any decision is made, the new profile is carefully tested.
4) At any point in time, you can request a report about these automatic tuning activities. You then have the option to check the tuned SQL statements and validate or remove the automatically generated SQL Profiles.
Oracle Database 11g: New Features for Administrators 6 - 5
Summary of Automation in 11g
• Task runs automatically (AUTOTASK framework)
• Workload automatically chosen (no SQL Tuning Set)
• SQL Profiles automatically tested
• SQL Profiles automatically implemented
• SQL statements automatically re-tuned if they regress
• Reporting available over any time period
Oracle Database 11g: New Features for Administrators 6 - 6
Picking Candidate SQL
[Diagram: AWR feeding four buckets (Weekly, Daily, Hourly, Average Exec) that are combined into the Candidate List]
1. Pull the top queries from the past week into four buckets:
   – Top for the past week
   – Top for any day in the past week
   – Top in any hour (single snapshot)
   – Top by average single execution
2. Combine the four buckets into one, assigning weights
3. Cap at 150 queries per bucket
Oracle Database 11g: New Features for Administrators 6 - 7
Maintenance Window Timeline
The Automatic SQL Tuning process takes place during the maintenance window. Furthermore, it runs as part of a single AUTOTASK job on a single instance to avoid concurrency issues. This is portrayed in the above graphic for one possible sample scenario: at some time after the beginning of the maintenance window, AUTOTASK starts the Automatic SQL Tuning job. The first thing the job does is generate a list of candidate SQL for tuning from the AWR sources described earlier. Once the list is complete, it tunes each statement in order of importance, one at a time. In this scenario, it first tunes S1, for which the SQL Tuning Advisor generates a SQL Profile recommendation (P1). Once P1 has been successfully tested, it is accepted, and the job moves on to the next statement, S2.
Note: The widths of the boxes in the diagram do not indicate relative execution times. Tuning and test execution should be by far the most expensive steps, with all the others completing relatively quickly.
Oracle Database 11g: New Features for Administrators 6 - 8
Automatic Tuning Process
[Diagram: recommendation flow; "Restructure SQL" recommendations are marked "Not considered"]
Automatic Tuning Process
With the list of candidate SQL built and ordered, the statements are tuned using the SQL Tuning Advisor, and any recommended SQL Profiles that significantly improve the performance of the SQL are implemented automatically. In Oracle Database 11g, the performance improvement factor must be at least 3 before a SQL Profile is implemented. As already mentioned, the Automatic SQL Tuning process implements only SQL Profile recommendations automatically. Other recommendations, such as creating new indexes, refreshing stale statistics, or restructuring SQL statements, are generated as part of the SQL tuning process but are not implemented. These are left for the DBA to review and implement manually as appropriate. Here is a short description of the general tuning process: tuning is performed on a per-statement basis. Because only SQL Profiles can be implemented, there is no need to consider the effect of such recommendations on the workload as a whole. For each statement, in order of importance, the tuning process does the following:
1) Tune the statement using the SQL Tuning Advisor. If a SQL Profile is found, check whether the base optimizer statistics are current for it.
Oracle Database 11g: New Features for Administrators 6 - 9
Automatic Tuning Process (Continued)
2) If a SQL Profile is recommended:
• Test the new SQL Profile by executing the statement with and without it.
• When a SQL Profile is generated and it causes the optimizer to pick a different execution plan for the statement, the advisor must decide whether or not to implement the SQL Profile. It makes its decision according to the flowchart shown above. Although the benefit thresholds apply to the sum of CPU and I/O time, a SQL Profile is not accepted if there is a degradation in either statistic. So the requirement is a three-times improvement in the sum of CPU and I/O time, with neither statistic becoming worse. This way, the statement runs faster than it would without the profile, even under CPU or I/O contention.
3) If stale or missing statistics are found, make this information available to GATHER_STATS_JOB.
Note: All SQL Profiles are created in the standard EXACT mode. They are matched and tracked according to the current value of the CURSOR_SHARING parameter. DBAs are responsible for setting CURSOR_SHARING appropriately for their workload.
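The acceptance test described in step 2 can be sketched as a small PL/SQL function. This is a hypothetical illustration of the rule only, not Oracle's internal implementation; all names are made up:

```sql
-- Hypothetical sketch of the profile acceptance rule: accept the profile
-- only if combined CPU + I/O time improves by at least a factor of 3
-- AND neither statistic individually regresses.
CREATE OR REPLACE FUNCTION accept_profile (
  old_cpu NUMBER, old_io NUMBER,   -- timings without the profile
  new_cpu NUMBER, new_io NUMBER)   -- timings with the profile
  RETURN BOOLEAN IS
BEGIN
  RETURN (old_cpu + old_io) >= 3 * (new_cpu + new_io)
         AND new_cpu <= old_cpu
         AND new_io  <= old_io;
END;
/
```

For example, a profile that triples combined performance but doubles CPU time while slashing I/O would still be rejected under this rule, because CPU regresses.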
Oracle Database 11g: New Features for Administrators 6 - 10
Focus on SQL Profiles
Auto-testing/implementing is limited to profiles because: • No lengthy, expensive set-up process (gathering stats, building an index takes time) • Private to the current compilation • No change to user SQL (does not change semantics) • SQL-level recommendation, can be effectively tested • Easily reversed by the DBA Testing is done for regular SQL tune tasks as well!
Oracle Database 11g: New Features for Administrators 6 - 11
DBA Controls
• Autotask configuration: – On/off switch – Maintenance windows running tuning task – CPU resource consumption of tuning task
• Task parameters:
  – SQL Profile implementation automatic/manual switch
  – Global time limit for tuning task
  – Per-SQL time limit for tuning task
  – Disable test-execute to save time
  – Maximum SQL Profiles accepted per day / overall
  – Task execution expiration period
6 - 12
Automatic SQL Tuning Task
As already mentioned, Automatic SQL Tuning is implemented as an automated maintenance task called Automatic SQL Tuning. You can see high-level information about the last runs of the Automatic SQL Tuning task on the Automated Maintenance Tasks page. You can get there from your Database Control Home page by clicking the Server tab and then, on the Server page, clicking the Automated Maintenance Tasks link in the Tasks section. On the Automated Maintenance Tasks page, you can see the predefined tasks. You can then access each task by clicking the corresponding link to get more information about the task itself. This is illustrated on the above slide. When you click either the Automatic SQL Tuning link or the latest execution icon (the green area on the timeline), you go to the Automatic SQL Tuning Result Summary page.
Oracle Database 11g: New Features for Administrators 6 - 13
Automatic SQL Tuning Configuration
Although Automatic SQL Tuning is enabled by default, you can control its execution from Enterprise Manager. On the Automatic SQL Tuning Result Summary page, you can click Configure. This takes you to the Auto Task Configuration page, where you can disable or enable Automatic SQL Tuning. By default, Automatic SQL Tuning executes in all predefined maintenance windows in MAINTENANCE_WINDOW_GROUP. You can disable it for specific days of the week. From this page, you can also edit each window to change its characteristics, by clicking Edit Window Group.
Note: In addition to the above, you also stop Automatic SQL Tuning if you set STATISTICS_LEVEL to BASIC, turn off AWR snapshots using DBMS_WORKLOAD_REPOSITORY, or set AWR retention to less than seven days.
Oracle Database 11g: New Features for Administrators 6 - 14
Automatic SQL Tuning Result Summary
In addition to letting you control the Automatic SQL Tuning task, the Automatic SQL Tuning Result Summary page contains various summary graphs. A particular example is given on the slide. The first chart in the Overall Task Statistics section shows the breakdown by finding type for the designated period of time. You can control the period for which the report is generated by selecting a value from the Time Period drop-down list. In the above example, All is used; this covers all executions of the task so far. You can request the report for any time period over the past month, since that is how long the advisor persists its tuning history. On the Breakdown by Finding Type graph, you can clearly see that only SQL Profiles can be implemented; although many more were recommended, not all of them were automatically implemented, for the reasons already explained. Similarly, recommendations for index creation and the other types are not implemented. However, the advisor keeps historical information about all recommendations in case you want to implement them later.
Oracle Database 11g: New Features for Administrators 6 - 15
Automatic SQL Tuning Result Recommendations
From the Automatic SQL Tuning Result Summary page, you can drill down to the Automatic SQL Tuning Result Details page by clicking the View Report button, as shown on the previous slide. From the Automatic SQL Tuning Result Details page, you can select a statement and click the View Recommendations button. This takes you to the Recommendations for SQL ID page for the corresponding statement. On this page, you can look at the new explain plan, or compare explain plans if the corresponding recommendation was implemented; this is the case in the example shown on the slide. You can also manually implement a recommendation that was not automatically implemented by Automatic SQL Tuning.
Oracle Database 11g: New Features for Administrators 6 - 16
Automatic SQL Tuning Result Details
On the Automatic SQL Tuning Result Details page, you can also see important information for each automatically tuned SQL statement: its SQL text and SQL ID, the type of recommendation made by the SQL Tuning Advisor, the verified benefit percentage if the recommendation was automatically applied, and the date of the recommendation. From this page, you can either drill down to the SQL statement itself by clicking its corresponding SQL ID link (shown on the slide), or select one of the SQL statements and click the View Recommendations button to see more details about the recommendations for that statement.
Oracle Database 11g: New Features for Administrators 6 - 17
Automatic SQL Tuning Result Details Drilldown
When you drill down to a particular SQL statement from the Automatic SQL Tuning Result Details page, you end up on the SQL Details page for that statement. There, you can view various statistics about the statement, look at its past or current activity, or review its tuning information. This last drilldown possibility is illustrated on the slide. In the example, you can see that there is currently a profile associated with the statement, and that the profile was automatically created by SYS_AUTO_SQL_TUNING_TASK. From that page, you can manage this profile by changing its category, deleting it, or disabling it.
Oracle Database 11g: New Features for Administrators 6 - 18
Automatic SQL Tuning Fine Tune
• Use DBMS_SQLTUNE: – SET_TUNING_TASK_PARAMETER – EXECUTE_TUNING_TASK – REPORT_AUTO_TUNING_TASK
Automatic SQL Tuning Fine Tune
You can use the DBMS_SQLTUNE PL/SQL package to control various aspects of SYS_AUTO_SQL_TUNING_TASK:
1) SET_TUNING_TASK_PARAMETER: The following parameters are supported for the automatic tuning task only:
• ACCEPT_SQL_PROFILES: TRUE/FALSE; whether the system should accept SQL Profiles automatically
• REPLACE_USER_SQL_PROFILES: TRUE/FALSE; whether the task should replace SQL Profiles created by the user
• MAX_SQL_PROFILES_PER_EXEC: Maximum number of SQL Profiles to create per run
• MAX_AUTO_SQL_PROFILES: Maximum total number of automatic SQL Profiles allowed on the system
2) EXECUTE_TUNING_TASK: Function used to manually run a new execution of the task in the foreground (it behaves just as it would when running in the background)
3) REPORT_AUTO_TUNING_TASK: Gets a text report covering a range of task executions
You can enable and disable SYS_AUTO_SQL_TUNING_TASK using the DBMS_AUTO_TASK_ADMIN PL/SQL package.
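For example, the following block turns automatic profile acceptance on and caps the number of profiles created per run. This is a sketch using the documented DBMS_SQLTUNE parameter names; the value 20 is purely illustrative:

```sql
BEGIN
  -- Allow the automatic task to accept SQL Profiles on its own
  DBMS_SQLTUNE.SET_TUNING_TASK_PARAMETER(
    task_name => 'SYS_AUTO_SQL_TUNING_TASK',
    parameter => 'ACCEPT_SQL_PROFILES',
    value     => 'TRUE');
  -- Create at most 20 SQL Profiles per task execution (illustrative limit)
  DBMS_SQLTUNE.SET_TUNING_TASK_PARAMETER(
    task_name => 'SYS_AUTO_SQL_TUNING_TASK',
    parameter => 'MAX_SQL_PROFILES_PER_EXEC',
    value     => 20);
END;
/
```

Setting ACCEPT_SQL_PROFILES to FALSE has the opposite effect: the advisor still runs nightly and records its findings, but leaves all profile implementation to the DBA.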
Oracle Database 11g: New Features for Administrators 6 - 19
Automatic SQL Tuning Fine Tune (Continued)
You can also access various Automatic SQL Tuning information through the highlighted dictionary views:
• DBA_ADVISOR_EXECUTIONS: Data about each execution of the task
• DBA_ADVISOR_SQLSTATS: Test-execute statistics generated while SQL Profiles are being tested
• DBA_ADVISOR_SQLPLANS: Plans encountered during test-execute
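As a sketch, the following query lists recent runs of the automatic tuning task from the first of these views (column names are the documented ones; the ordering is just one reasonable choice):

```sql
-- List recent executions of the automatic SQL tuning task
SELECT execution_name, execution_start, execution_end, status
FROM   dba_advisor_executions
WHERE  task_name = 'SYS_AUTO_SQL_TUNING_TASK'
ORDER  BY execution_start DESC;
```

The EXECUTION_NAME values returned here are the identifiers you pass to REPORT_AUTO_TUNING_TASK when requesting a report over a specific range of runs.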
Oracle Database 11g: New Features for Administrators 6 - 20
Getting Reports Using PL/SQL Interface The above example shows you how to invoke the REPORT_AUTO_TUNING_TASK function to get a text report related to two executions of the task. The first command lists executions, and the second gives the summary and general information about executions.
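A hedged example of such a call, requesting a text report over the task's full history (parameter names follow the documented DBMS_SQLTUNE interface; NULL execution bounds are assumed to mean the whole retained history):

```sql
-- Text report summarizing all retained executions of the automatic task
SELECT DBMS_SQLTUNE.REPORT_AUTO_TUNING_TASK(
         begin_exec => NULL,
         end_exec   => NULL,
         type       => 'TEXT',
         level      => 'TYPICAL',
         section    => 'ALL')
FROM   dual;
```

To report on a specific range instead, pass the execution names obtained from DBA_ADVISOR_EXECUTIONS as begin_exec and end_exec.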
Oracle Database 11g: New Features for Administrators 6 - 21
Automatic SQL Tuning Considerations
• SQL not considered for Automatic SQL Tuning:
  – Ad hoc or rarely repeated SQL
  – Parallel queries
  – Queries that remain long-running after profiling
  – Recursive SQL statements
  – DML and DDL statements
• Above categories can still be manually tuned using SQL Advisor.
Automatic SQL Tuning Considerations
Automatic SQL Tuning does not seek to solve every SQL performance issue occurring on a system. It does not aim to tune the following types of SQL:
• Ad hoc or rarely repeated SQL: If a statement is not executed multiple times in the same form, the advisor ignores it. Statements that do not repeat within a week are likewise not considered.
• Parallel queries.
• Long-running queries (post-profile): If a query still takes too long to run after being SQL-profiled, it is not practical to test-execute, and the advisor therefore ignores it. Note that this does not mean the advisor ignores all long-running queries. If the advisor can find a SQL Profile that causes a query that once took hours to run in minutes, the profile can still be accepted, because test execution remains possible: the advisor executes the old plan just long enough to determine that it is worse than the new one, and then terminates test execution without waiting for the old plan to finish.
• Recursive SQL statements.
• DML such as INSERT ... SELECT statements, and DDL such as CREATE TABLE ... AS SELECT.
With the exception of truly ad hoc SQL, these limitations apply to Automatic SQL Tuning only. Such statements can still be tuned by manually running the SQL Tuning Advisor.
Oracle Database 11g: New Features for Administrators 6 - 22
Summary
In this lesson, you should have learned how to:
• Set up and modify Automatic SQL Tuning
• Use the PL/SQL interface to perform fine-tuning
• View and interpret reports generated by Automatic SQL Tuning
Oracle Database 11g: New Features for Administrators 7 - 1
Objectives
After completing this lesson, you should be able to: • Create AWR baselines for future time periods • Identify the views that capture foreground statistics • Control Automated Maintenance Tasks • Resource Manager • Scheduler
Oracle Database 11g: New Features for Administrators 7 - 3
Comparative Performance Analysis with AWR Baselines
[Diagram: actual performance compared against baseline performance]
• An AWR baseline contains a set of AWR snapshots for an "interesting or reference" period of time
• A baseline is key for performance tuning, helping to:
  – guide the setting of alert thresholds
  – monitor performance
  – compare advisor reports
Oracle Database 11g: New Features for Administrators 7 - 4
Automatic Workload Repository Baselines Oracle Database 11g further enhances the Automatic Workload Repository baselines. • Out-of-the-box Moving Window Baseline for which you can specify adaptive thresholds • Schedule the creation of a Baseline using Baseline Templates • Rename baselines, and set expiration dates for baselines
Automatic Workload Repository Baselines Oracle Database 11g consolidates the various concepts of baselines in Oracle, specifically Enterprise Manager and RDBMS, into the single concept of the Automatic Workload Repository (AWR) baseline. Oracle Database 11g AWR baselines provide powerful capabilities for defining dynamic and future baselines and considerably simplify the process of creating and managing performance data for comparison purposes. Oracle Database 11g introduces the concept of the moving window baselines. A system-defined moving window baseline that corresponds to all the AWR data within the AWR retention period is created by default. In Oracle Database 11g baselines are enabled by default as long as STATISTICS_LEVEL=TYPICAL or ALL.
Oracle Database 11g: New Features for Administrators 7 - 5
Moving Window Baseline
There is one moving window baseline:
• SYSTEM_MOVING_WINDOW: A moving window baseline that corresponds to the last 8 days of AWR data
• Created out of the box in 11g
• By default, the adaptive thresholds functionality computes statistics on this baseline
Moving Window Baseline There is a system-defined moving window baseline created by default that corresponds to the complete set of snapshot data within the AWR retention period. The N days setting is NULL, therefore the window size always matches the AWR retention setting. This system-defined baseline provides a default out-of-the-box baseline for EM performance screens to compare the performance with the current database performance. Note: The default retention period for snapshot data has been changed from seven days to eight days in Oracle Database 11g to ensure the capture of an entire week of performance data.
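Because the moving window size tracks the AWR retention setting, widening the window starts with extending retention. A sketch using the documented DBMS_WORKLOAD_REPOSITORY procedure; the values are illustrative, and both retention and interval are specified in minutes:

```sql
BEGIN
  -- Retain 30 days of snapshots (30 * 24 * 60 = 43200 minutes),
  -- keeping the default 1-hour snapshot interval
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
    retention => 43200,
    interval  => 60);
END;
/
```

Keep in mind that longer retention increases the space consumed by AWR in the SYSAUX tablespace.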
Oracle Database 11g: New Features for Administrators 7 - 6
Oracle Database 11g: New Features for Administrators 7 - 7
Baseline Templates
• Allow you to schedule the creation of baselines for future time periods of interest:
  – A single time period in the future
  – A repeating schedule
• For example:
  – A known holiday weekend
  – Every Monday morning from 10 AM to 2 PM
• Once a Baseline Template has been specified for a future time period, MMON detects when the end time has passed and creates the baseline.
Baseline Templates Creating baselines for future time periods allows you to mark time periods that you know will be interesting. For example, you may want the system to automatically generate a baseline for every Monday morning for the whole year, or you can ask the system to generate a baseline for an upcoming holiday weekend if you suspect that it is a high-volume weekend. Previously, you could only create baselines on snapshots that already existed. A nightly MMON task goes through all the templates for baseline generation and checks to see if any time ranges have changed from the future to the past within the last day. For the relevant time periods, the MMON task then creates a baseline for the time period.
Oracle Database 11g: New Features for Administrators 7 - 8
Creating AWR Baselines
You can create two types of AWR baselines: Single and Repeating. Both types are explained on the slide. To get to the Baselines page, click the AWR Baselines link on the Server tab of the Database Instance page. Once on the Baselines page, click Create and follow the wizard to create your baseline.
Note: Before you can set up AWR Baseline Metric Thresholds for a particular baseline, you need to compute the baseline statistics, which is a possible action from the Baselines page. Other possible actions not shown on the slide are Customize Performance Page and Run AWR Report.
Oracle Database 11g: New Features for Administrators 7 - 9
Single AWR Baseline If you select the Single option in the previous step, you end up on the page shown on this slide. There, you can select the time period corresponding to your interest. Once done, click OK to create the static baseline. Note: If both the Start Time and the End Time are in the future, a baseline template with the same name as the baseline will be created.
Oracle Database 11g: New Features for Administrators 7 - 10
Creating Repeating Baseline Template You can define repeating baselines using Enterprise Manager. In the wizard, once you selected Repeating at step one, you can specify the repeat interval as shown on the slide.
Oracle Database 11g: New Features for Administrators 7 - 11
Generate Baseline for Single Time Period in Future

[Diagram: snapshot timeline ... T4 T5 T6 ... Tx Ty Tz, with the interesting future time period marked]

DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE_TEMPLATE (
   start_time    IN TIMESTAMP,
   end_time      IN TIMESTAMP,
   baseline_name IN VARCHAR2,
   template_name IN VARCHAR2,
   expiration    IN NUMBER,
   dbid          IN NUMBER DEFAULT NULL);
Generate Baseline for Single Time Period in Future
You can now create a template describing how baselines are to be created for different time periods, either in the future for predictable schedules or for past timelines. The Manageability infrastructure generates a task from these inputs and automatically creates a baseline for the specified time period when the time comes. Using time-based definitions in the baseline creation does not require you to identify the start and end snapshot identifiers. For the CREATE_BASELINE and CREATE_BASELINE_TEMPLATE procedures, you can now also specify an expiration duration. The expiration duration, specified in days, is the number of days you want the baseline to be maintained. A value of NULL means the baseline never expires. The above example illustrates a template creation for a single time period.
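A hedged example of such a single-period template; the dates, names, and expiration value below are made up for illustration:

```sql
BEGIN
  -- Hypothetical example: capture a baseline over an upcoming holiday
  -- weekend and keep it for 30 days
  DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE_TEMPLATE(
    start_time    => TO_TIMESTAMP('2008-12-26 00:00', 'YYYY-MM-DD HH24:MI'),
    end_time      => TO_TIMESTAMP('2008-12-29 00:00', 'YYYY-MM-DD HH24:MI'),
    baseline_name => 'HOLIDAY_BL',
    template_name => 'HOLIDAY_TPL',
    expiration    => 30);
END;
/
```

Once the end time passes, MMON creates the HOLIDAY_BL baseline from the snapshots in that window; no further action is needed.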
Oracle Database 11g: New Features for Administrators 7 - 12
Creating Repeating Baseline Template

DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE_TEMPLATE (
   day_of_week          IN VARCHAR2,
   hour_in_day          IN NUMBER,
   duration             IN NUMBER,
   start_time           IN TIMESTAMP,
   end_time             IN TIMESTAMP,
   baseline_name_prefix IN VARCHAR2,
   template_name        IN VARCHAR2,
   expiration           IN NUMBER,
   dbid                 IN NUMBER DEFAULT NULL);
Creating Repeating Baseline Template
You can use the syntax above to generate baseline templates that automatically create baselines for a contiguous time period based on a repeating time schedule. You can also specify whether you want the baselines to be automatically removed after a specified expiration interval (expiration). The parameters of the CREATE_BASELINE_TEMPLATE procedure are:
• day_of_week: Day of the week that the baseline should repeat on. Specify one of the following values: 'SUNDAY', 'MONDAY', 'TUESDAY', 'WEDNESDAY', 'THURSDAY', 'FRIDAY', 'SATURDAY'.
• hour_in_day: A value of 0-23 specifying the hour in the day the baseline should start.
• duration: The duration (in hours) after hour_in_day that the baseline should last.
• start_time: Effective time to start generating baselines (converted to the nearest snapshot ID).
• end_time: Effective time to stop generating baselines (converted to the nearest snapshot ID).
• baseline_name_prefix: Prefix for the baseline name. When a baseline is created, its name is the prefix appended with date information.
• template_name: Name for the template.
• expiration: The number of days to maintain the created baselines. If NULL, expiration is infinite, meaning the baselines are never dropped. Defaults to NULL.
• dbid: Database identifier for the baseline. If NULL, the database identifier of the local database is used. Defaults to NULL.
Oracle Database 11g: New Features for Administrators 7 - 13
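A hedged example matching the slide's scenario of a Monday 10 AM to 2 PM baseline; all names and values are illustrative:

```sql
BEGIN
  -- Hypothetical example: generate a baseline every Monday, 10 AM-2 PM,
  -- for the next year; each baseline is kept for 30 days
  DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE_TEMPLATE(
    day_of_week          => 'MONDAY',
    hour_in_day          => 10,
    duration             => 4,
    start_time           => SYSTIMESTAMP,
    end_time             => SYSTIMESTAMP + INTERVAL '1' YEAR,
    baseline_name_prefix => 'MONDAY_AM_',
    template_name        => 'MONDAY_AM_TPL',
    expiration           => 30);
END;
/
```

Each generated baseline is named from the prefix plus date information, so the weekly baselines remain individually identifiable in DBA_HIST_BASELINE.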
DBMS_WORKLOAD_REPOSITORY Package
• The following procedures have been added:
  – CREATE_BASELINE_TEMPLATE
  – DROP_BASELINE_TEMPLATE
  – RENAME_BASELINE
  – MODIFY_BASELINE_WINDOW_SIZE
DBMS_WORKLOAD_REPOSITORY Package Oracle Database 11g offers the above set of PL/SQL interfaces in the DBMS_WORKLOAD_REPOSITORY package for administration and filtering. MODIFY_BASELINE_WINDOW_SIZE allows you to modify the size of the SYSTEM_MOVING_WINDOW.
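For instance, a sketch of changing the moving window size; the window size is specified in days and must not exceed the AWR retention period, and 15 here is purely illustrative:

```sql
BEGIN
  -- Resize SYSTEM_MOVING_WINDOW to cover the last 15 days of AWR data
  DBMS_WORKLOAD_REPOSITORY.MODIFY_BASELINE_WINDOW_SIZE(
    window_size => 15);
END;
/
```

If the requested window is larger than the current AWR retention, extend retention first with MODIFY_SNAPSHOT_SETTINGS; otherwise the call fails.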
Oracle Database 11g: New Features for Administrators 7 - 14
DBA_HIST_BASELINE Modified View

New column            Description
BASELINE_TYPE         Values: 'Static', 'Moving Window', or 'Generated'
MOVING_WINDOW_SIZE    If the baseline type is 'Moving Window', the size of the
                      moving window in days; if NULL, the window size is the
                      value of the AWR retention setting
CREATION_TIME         Time the baseline was created
EXPIRATION            Expiration setting for the baseline, in days; the baseline
                      is maintained for this period; NULL keeps the baseline
                      data forever
TEMPLATE_NAME         Name of the template that created this baseline, if any
LAST_TIME_COMPUTED    Last time adaptive threshold statistics were computed
                      over the baseline
DBA_HIST_BASELINE Modified View
Baselines with a BASELINE_TYPE of 'Static' are created manually by you. For the 'Moving Window' baseline, the start and end snapshot IDs are dynamic. 'Generated' baselines are automatically created by the system using a template.

SQL> desc dba_hist_baseline
 Name                              Null?    Type
 --------------------------------- -------- ------------
 DBID                                       NUMBER
 BASELINE_ID                                NUMBER
 BASELINE_NAME                              VARCHAR2(64)
 BASELINE_TYPE                              VARCHAR2(13)
 START_SNAP_ID                              NUMBER
 START_SNAP_TIME                            TIMESTAMP(3)
 END_SNAP_ID                                NUMBER
 END_SNAP_TIME                              TIMESTAMP(3)
 MOVING_WINDOW_SIZE                         NUMBER
 CREATION_TIME                              DATE
 EXPIRATION                                 NUMBER
 TEMPLATE_NAME                              VARCHAR2(64)
 LAST_TIME_COMPUTED                         DATE
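A simple query against this view, as a sketch; the columns are those described above:

```sql
-- List all baselines with their type, origin template, and expiration
SELECT baseline_name, baseline_type, template_name,
       creation_time, expiration
FROM   dba_hist_baseline
ORDER  BY creation_time;
```

Rows with a BASELINE_TYPE of 'Generated' and a non-NULL TEMPLATE_NAME are the ones produced automatically by baseline templates.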
Oracle Database 11g: New Features for Administrators 7 - 15
DBA_HIST_BASELINE_DETAILS New View
New column        Description
INSTANCE_NUMBER   Instance ID for the baseline data
SHUTDOWN          Shows whether there was a database startup/shutdown in this
                  interval; possible values: 'YES', 'NO', NULL
ERROR_COUNT       Count of errors in the snapshots in the baseline snapshot range
PCT_TOTAL_TIME    Amount of time captured in snapshots divided by the total
                  possible time for this baseline
DBA_HIST_BASELINE_DETAILS New View
Oracle Database 11g displays information that allows you to determine the validity of a given baseline. The PCT_TOTAL_TIME column provides a measure of how much of the snapshot data exists within the baseline.

SQL> desc dba_hist_baseline_details
 Name                              Null?    Type
 --------------------------------- -------- --------------
 DBID                                       NUMBER
 INSTANCE_NUMBER                            NUMBER
 BASELINE_ID                                NUMBER
 BASELINE_NAME                              VARCHAR2(64)
 BASELINE_TYPE                              VARCHAR2(13)
 START_SNAP_ID                              NUMBER
 START_SNAP_TIME                            TIMESTAMP(3)
 END_SNAP_ID                                NUMBER
 END_SNAP_TIME                              TIMESTAMP(3)
 SHUTDOWN                                   VARCHAR2(3)
 ERROR_COUNT                                NUMBER
 PCT_TOTAL_TIME                             NUMBER
 LAST_TIME_COMPUTED                         DATE
 MOVING_WINDOW_SIZE                         NUMBER
 CREATION_TIME                              DATE
 EXPIRATION                                 NUMBER
 TEMPLATE_NAME                              VARCHAR2(64)
Oracle Database 11g: New Features for Administrators 7 - 16
DBA_HIST_BASELINE_TEMPLATE New View

Column            Description
TEMPLATE_ID       Internal ID for the template describing how to generate the
                  baseline
TEMPLATE_TYPE     Values: 'SINGLE' (just one time period) or 'REPEATING'
                  (maintain a repeating time period)
START_TIME        Start time for the future baseline ('SINGLE'); for
                  'REPEATING', the effective start time from which baselines
                  should start being generated
END_TIME          End time for the future baseline ('SINGLE'); for 'REPEATING',
                  the effective end time at which baselines should stop being
                  generated
DAY_OF_WEEK       Day of the week to create the baseline; one of 'SUNDAY',
                  'MONDAY', 'TUESDAY', 'WEDNESDAY', 'THURSDAY', 'FRIDAY',
                  'SATURDAY', 'ALL'; used for 'REPEATING'
HOUR_IN_DAY       Value of 0-23 specifying the hour in the day to create the
                  baseline for; used for 'REPEATING'
DURATION          Length of the time period for the baseline to be created;
                  used for 'REPEATING'
REPEAT_INTERVAL   String representing the repeat-time information, in
                  DBMS_SCHEDULER format
LAST_GENERATED    Last time a baseline was generated for this template
DBA_HIST_BASELINE_TEMPLATE New View

SQL> desc dba_hist_baseline_template
 Name                            Null?    Type
 ------------------------------- -------- ------------------
 DBID                            NOT NULL NUMBER
 TEMPLATE_ID                     NOT NULL NUMBER
 TEMPLATE_NAME                   NOT NULL VARCHAR2(30)
 TEMPLATE_TYPE                   NOT NULL VARCHAR2(9)
 BASELINE_NAME_PREFIX            NOT NULL VARCHAR2(30)
 START_TIME                      NOT NULL DATE
 END_TIME                        NOT NULL DATE
 DAY_OF_WEEK                              VARCHAR2(9)
 HOUR_IN_DAY                              NUMBER
 DURATION                                 NUMBER
 EXPIRATION                               NUMBER
 REPEAT_INTERVAL                          VARCHAR2(128)
 LAST_GENERATED                           DATE
Oracle Database 11g: New Features for Administrators 7 - 17
Oracle Database 11g: New Features for Administrators 7 - 18
Performance Monitoring and Baselines • Performance alert thresholds are difficult to determine: – Expected metric values vary by workload type – Expected metric values vary by system load
• Baselines can capture metric value statistics: – Automatically computed over system moving window – Manually computed over static baselines
Oracle Database 11g: New Features for Administrators 7 - 19
Performance Monitoring and Baselines • Baseline metric statistics can be used to determine alert thresholds: – Unusual values vs. baseline data = significance level thresholds – Close or exceeding peak value over baseline data = percent of maximum thresholds
Defining Alert Thresholds Using a Baseline
Once AWR baseline statistics are computed for a particular baseline, you can set metric thresholds specific to that baseline. You can compute baseline statistics directly from the Baselines page, as discussed earlier. Then, go to the AWR Baseline Metric Thresholds page and select the type of metrics you want to set. Next, select a specific metric and click Edit Thresholds. On the corresponding Edit AWR Baseline Metric Thresholds page, specify your thresholds in the Threshold Settings section and click Apply Thresholds. You can specify thresholds based on the statistics computed for your baseline, as illustrated on the slide. In addition to Significance Level, the other possibilities are Percentage of Maximum and Fixed Values.
Note: Once a threshold is set using AWR Baseline Metric Thresholds, the previous threshold values are discarded, and the statistics from the associated baseline drive the thresholds until they are cleared (via the Baseline Metric Thresholds UI or the PL/SQL interface).
Oracle Database 11g: New Features for Administrators 7 - 21
Using EM to Quickly Configure Adaptive Thresholds
Oracle Database 11g Enterprise Manager provides significant usability improvements in the selection of adaptive thresholds for database performance metrics, with full integration with AWR baselines as the source of the metric statistics. EM offers a quick configuration option: a one-click starter set of thresholds based on OLTP or data warehouse workload profiles. You select the appropriate workload profile from the subsequent pop-up window. With this simple selection, the system automatically configures and evolves adaptive thresholds, based on the SYSTEM_MOVING_WINDOW baseline, for the group of metrics corresponding best to the chosen workload.
Oracle Database 11g: New Features for Administrators 7 - 22
Using EM to Quickly Configure Adaptive Thresholds You then confirm the creation of the desired workload baselines. Once configured, you can edit the threshold levels through the Edit Threshold button. The Warning Level and Critical Level columns indicate the type of alert generated. The Significance Level indicates whether the level of observation is at or above a certain value. The following significance-level thresholds are supported:
• High, significant at the 0.95 (5 in 100) level
• Very High, significant at the 0.99 (1 in 100) level
• Severe, significant at the 0.999 (1 in 1,000) level
• Extreme, significant at the 0.9999 (1 in 10,000) level
When editing threshold levels, it is recommended that you set significance-level thresholds conservatively and experimentally at first. The threshold values are determined by examining statistics for the metric values observed over the baseline time period. The system sets the thresholds based on prior data from the system itself and some metadata (essentially, the statistic) provided by you. This is significantly easier in the multi-target case because you no longer need to know the system-specific metric values. The statistics to monitor are the maximum value as well as the significance levels. The significance levels let you set the threshold to a value that is statistically significant at the stated level (for example, 1 in 1,000).
Oracle Database 11g: New Features for Administrators 7 - 23
Changing Adaptive Threshold Settings
(Slide graph: observed value and baseline calculation over time; the threshold adapts automatically)
Changing Adaptive Threshold Settings Once adaptive thresholds are set, you can change their values if need be. You can do so as shown on the slide. On the Edit AWR Baseline Metric Thresholds page corresponding to the metric you want to modify, you can see the graphic history of the observed value for the metric as well as the materialization of the computed baseline value, and the corresponding adaptive threshold.
Oracle Database 11g: New Features for Administrators 7 - 24
V$SYSTEM_EVENT has five new NUMBER columns that represent the statistics from purely foreground sessions: • TOTAL_WAITS_FG • TOTAL_TIMEOUTS_FG • TIME_WAITED_FG • AVERAGE_WAIT_FG • TIME_WAITED_MICRO_FG V$SYSTEM_WAIT_CLASS has two new NUMBER columns that represent the statistics from purely foreground sessions: • TOTAL_WAITS_FG • TIME_WAITED_FG
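The foreground-only columns can be queried alongside the existing totals. A small sketch (filtering out the Idle wait class, which V$SYSTEM_EVENT exposes in its WAIT_CLASS column):

```sql
-- Compare total vs. foreground-only wait statistics per event.
SELECT event,
       total_waits, total_waits_fg,
       time_waited, time_waited_fg
FROM   v$system_event
WHERE  wait_class <> 'Idle'
ORDER BY time_waited_fg DESC;
```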
Oracle Database 11g: New Features for Administrators 7 - 25
Oracle Database 10g introduced the execution of some automated maintenance tasks during a maintenance window. Specifically, the automated tasks were statistics collection and the Segment Advisor. With Oracle Database 11g, the Automated Maintenance Tasks feature relies on Resource Manager being enabled during the maintenance windows. The idea is to prevent maintenance work from consuming excessive amounts of system resources. To facilitate the mapping of automated tasks to specific windows, the maintenance windows shown above are created in place of the existing WEEKNIGHT_WINDOW and WEEKEND_WINDOW windows inside the MAINTENANCE_WINDOW_GROUP window group. You are still completely free to define other maintenance windows, as well as to change the start times and durations of the windows listed above. Likewise, any maintenance windows that are deemed unnecessary can be disabled or removed. These operations can be done using EM or the Scheduler interfaces.
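Changing a window's schedule is an ordinary Scheduler operation. A hedged sketch, assuming the 11g per-day maintenance windows (for example, MONDAY_WINDOW) have been created as described above; the duration and start time here are illustrative:

```sql
-- Shorten Monday's maintenance window to 2 hours, starting at 23:00.
-- Disable the window while changing its attributes, then re-enable it.
BEGIN
  DBMS_SCHEDULER.DISABLE(name => 'SYS.MONDAY_WINDOW');
  DBMS_SCHEDULER.SET_ATTRIBUTE(
    name      => 'SYS.MONDAY_WINDOW',
    attribute => 'duration',
    value     => NUMTODSINTERVAL(2, 'HOUR'));
  DBMS_SCHEDULER.SET_ATTRIBUTE(
    name      => 'SYS.MONDAY_WINDOW',
    attribute => 'repeat_interval',
    value     => 'FREQ=WEEKLY;BYDAY=MON;BYHOUR=23;BYMINUTE=0;BYSECOND=0');
  DBMS_SCHEDULER.ENABLE(name => 'SYS.MONDAY_WINDOW');
END;
/
```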
Oracle Database 11g: New Features for Administrators 7 - 26
Default Maintenance Resource Manager Plan

SQL> show parameter resource_manager_plan

NAME                   TYPE    VALUE
---------------------- ------- -------------------------------------------
resource_manager_plan  string  SCHEDULER[0x2843]:DEFAULT_MAINTENANCE_PLAN
When a maintenance window opens, the DEFAULT_MAINTENANCE_PLAN resource manager plan is automatically set to control the amount of CPU used by automated maintenance tasks. To give different priorities to each possible task during a maintenance window, various consumer groups are assigned to DEFAULT_MAINTENANCE_PLAN. The hierarchy between groups and plans is shown on the slide above. For high-priority tasks:
• The Optimizer Statistics Gathering automated task is assigned to the ORA$AUTOTASK_STATS_GROUP consumer group.
• The Segment Advisor automated task is assigned to the ORA$AUTOTASK_SPACE_GROUP consumer group.
• The Automatic SQL Tuning automated task is assigned to the ORA$AUTOTASK_SQL_GROUP consumer group.
Note: If need be, you can manually change the percentage of CPU resources allocated to the various automated maintenance task consumer groups inside ORA$AUTOTASK_HIGH_SUB_PLAN.
Oracle Database 11g: New Features for Administrators 7 - 27
The Automated Maintenance Tasks feature is implemented by a background process: the Autotask Background Process (ABP). ABP functions as an intermediary between automated tasks and the Scheduler. Its main purpose is to translate tasks into jobs for execution by the Scheduler. Just as important, ABP maintains a history of the execution of all tasks. ABP stores its private repository in the SYSAUX tablespace. You can view this repository through DBA_AUTOTASK_TASK. ABP is spawned by MMON, typically at the start of a maintenance window. Only one ABP is required for all instances. Every 10 minutes, MMON checks whether ABP needs to be restarted in case it has crashed. ABP determines the list of jobs that need to be created for each maintenance task. Each task in this list receives a priority: urgent, high, or medium. Within each priority group, jobs are arranged in the preferred order of execution. ABP creates jobs in a round-robin manner: all urgent-priority jobs are created first, then all high-priority jobs, and finally all medium-priority jobs. Depending on the task's priority attribute (urgent, high, or medium), various Scheduler job classes are created to map the task-priority consumer groups to corresponding job classes. Note: With Oracle Database 11g, no job is permanently associated with a specific automated task. Therefore, it is not possible to use the DBMS_SCHEDULER API to control the behavior of automated tasks. Instead, the DBMS_AUTO_TASK_ADMIN package should be used.
Oracle Database 11g: New Features for Administrators 7 - 28
The Automatic Maintenance Tasks feature decides when, and in what order, tasks are performed. As a DBA, you can control the following:
• If the maintenance window turns out to be inadequate for the maintenance workload, you can reconfigure the maintenance window.
• You can control the percentage of resources allocated to the automated maintenance tasks during each window.
• You can enable or disable individual tasks in some or all maintenance windows.
• In a RAC environment, you can shift maintenance work to one or more instances by mapping maintenance work to a service. Enabling the service on a subset of instances shifts the maintenance work to those instances.
As shown on the slide, Enterprise Manager is the preferred way to control Automatic Maintenance Tasks. However, you can also use the DBMS_AUTO_TASK_ADMIN package.
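For example, disabling and later re-enabling one task across all maintenance windows through DBMS_AUTO_TASK_ADMIN might look like the following sketch ('sql tuning advisor' is the documented client name for the Automatic SQL Tuning task; passing NULL for operation and window_name applies the change to all windows):

```sql
-- Disable Automatic SQL Tuning in all maintenance windows.
BEGIN
  DBMS_AUTO_TASK_ADMIN.DISABLE(
    client_name => 'sql tuning advisor',
    operation   => NULL,
    window_name => NULL);
END;
/

-- Re-enable it later.
BEGIN
  DBMS_AUTO_TASK_ADMIN.ENABLE(
    client_name => 'sql tuning advisor',
    operation   => NULL,
    window_name => NULL);
END;
/

-- Check the current status of all automated task clients.
SELECT client_name, status FROM dba_autotask_client;
```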
Oracle Database 11g: New Features for Administrators 7 - 29
Important I/O Metrics for Oracle Databases Disk bandwidth
Important I/O Metrics for Oracle Databases To understand the type of I/O resource metrics that you need to know for tuning purposes in an Oracle Database environment, we need to briefly go through the types of I/O issued by Oracle Database processes. The database I/O workload typically consists of small random I/Os and large sequential I/Os. The small random I/Os are more prevalent in an OLTP application environment, where each foreground process reads a data block into the buffer cache for updates, and the changed blocks are written in batches by the DBWR process. Large sequential I/Os are common in an OLAP application environment. OLTP application performance depends on how fast the small I/Os are serviced, which depends on how fast the disk can spin and seek to the data. Large I/O performance depends on the capacity of the I/O channel that connects the server to the storage array: the larger the capacity of the channel, the better the large I/O throughput. IOPS (I/Os per second): This metric represents the number of small random I/Os that can be serviced in a second. The IOPS rate depends mainly on how fast the disk media can spin. The IOPS rate of a storage array can be increased either by adding more disk drives or by using disk drives with a higher RPM (revolutions per minute) rating. MBPS (megabytes per second): The rate at which data can be transferred between the computing server node and the storage array depends on the capacity of the I/O channel that is used to transfer data. The wider the pipe, the more data can be transferred.
Oracle Database 11g: New Features for Administrators 7 - 30
Important I/O Metrics for Oracle Databases (Continued) The throughput of a streaming data application depends on how fast this data can be accessed, and is measured using the MBPS metric. Even though the disks themselves have an upper limit on the amount of sequential data they can transfer, it is often the channel capacity that limits the overall throughput of the system. For example, a host connected to a NAS server through a GigE switch is limited to a transfer capacity of roughly 128 MBPS. Hence it becomes important to throttle based on this channel resource. I/O Latency: Latency is another important metric used in measuring the performance of an I/O subsystem. Latency represents the time it takes for a submitted I/O request to be serviced by the storage. Put another way, it represents the fixed overhead before the first byte of a transfer arrives after an I/O request has been submitted. A higher latency usually indicates an overloaded system. Latency values are more commonly used for small random I/Os when tuning a system. If too many I/Os are queued up against a disk, the latency increases. To improve the latency of I/O requests, data is usually striped across multiple spindles so that all I/O requests to a file do not go to the same disk. Apart from the main resources mentioned above, there are also other storage array components that can affect I/O performance. Array vendors provide caching mechanisms to improve read throughput, but their real benefit is questionable in a database environment because Oracle Database uses its own caches and read-ahead mechanisms so that data is available directly from RAM rather than from disk.
Oracle Database 11g: New Features for Administrators 7 - 31
I/O Calibration and Enterprise Manager To determine the previously discussed important I/O metrics, you can use the I/O Calibration tool exposed through Enterprise Manager or PL/SQL in Oracle Database 11g. I/O Calibration is a modified version of the ORION tool. Because calibration requires issuing enough I/Os to saturate the storage system, any performance-critical sessions will be negatively impacted. Thus, you should run I/O calibration only when there is little activity on your system. I/O Calibration takes approximately ten minutes to run. You can launch an I/O Calibration task directly from Enterprise Manager as shown on the slide. You do this by accessing the Performance tab. On the Performance page, you can click the I/O tab and then the I/O Calibration button. Once on the I/O Calibration page, you need to specify the approximate number of physical disks attached to your database storage system as well as the maximum tolerable latency for a single-block I/O request. Then, in the Schedule section of the I/O Calibration page, you can specify when to execute the calibration operation. You click the Submit button to create a corresponding Scheduler job. From the Scheduler Jobs page, you can see the amount of time it takes for the calibration task to run. Once it is finished, you can go back to the I/O Calibration page to see the results of the calibration operation, which give you the maximum I/Os per second, the maximum megabytes per second, and the latency metrics.
Oracle Database 11g: New Features for Administrators 7 - 32
I/O Calibration and PL/SQL Interface Alternatively, you can run the I/O Calibration task using the PL/SQL interface. This is done by executing the CALIBRATE_IO procedure from the DBMS_RESOURCE_MANAGER package. This procedure calibrates the I/O capabilities of the storage. The calibration status and results are available from the V$IO_CALIBRATION_RESULTS view. Here is a brief description of the parameters you can specify for the CALIBRATE_IO procedure:
• num_disks: Approximate number of physical disks in the database storage
• max_latency: Maximum tolerable latency in milliseconds for database-block-sized I/O requests
• max_iops: Maximum number of I/O requests per second that can be sustained. The I/O requests are randomly distributed, database-block-sized reads.
• max_mbps: Maximum throughput of I/O that can be sustained, expressed in megabytes per second. The I/O requests are randomly distributed, one-megabyte reads.
• actual_latency: Average latency of database-block-sized I/O requests at the max_iops rate, expressed in milliseconds
Usage notes:
• Only users with the SYSDBA privilege can run this procedure.
• Only one calibration can be run at a time. If another calibration is initiated at the same time, it will fail.
• For a RAC database, the workload is generated simultaneously from all instances.
• The latency time is computed only when the initialization parameter TIMED_STATISTICS is set to TRUE (which is the case when STATISTICS_LEVEL is set to TYPICAL or ALL).
Oracle Database 11g: New Features for Administrators 7 - 33
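A minimal invocation sketch, run from a SYSDBA session during a quiet period (the disk count and latency values here are illustrative, and the OUT-parameter names should be verified against your release's DBMS_RESOURCE_MANAGER specification):

```sql
-- Calibrate I/O: report sustainable IOPS, MBPS, and actual latency.
SET SERVEROUTPUT ON
DECLARE
  l_max_iops PLS_INTEGER;
  l_max_mbps PLS_INTEGER;
  l_latency  PLS_INTEGER;
BEGIN
  DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
    num_physical_disks => 4,     -- approximate disk count (illustrative)
    max_latency        => 10,    -- tolerable latency in ms (illustrative)
    max_iops           => l_max_iops,
    max_mbps           => l_max_mbps,
    actual_latency     => l_latency);
  DBMS_OUTPUT.PUT_LINE('IOPS='   || l_max_iops ||
                       ' MBPS='  || l_max_mbps ||
                       ' latency(ms)=' || l_latency);
END;
/

-- Status and results, using the view named in the text:
SELECT * FROM v$io_calibration_results;
```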
I/O Statistics Overview To give a consistent set of statistics for all I/Os issued from an Oracle instance, a set of virtual views is introduced with Oracle Database 11g that collects I/O statistics in three dimensions:
• RDBMS components: Components are grouped by their functionality into the 12 categories shown on the slide.
• When Resource Management is enabled, I/O statistics are collected for all consumer groups that are part of the currently enabled resource plan.
• I/O statistics are also collected for individual files (if the file has been opened).
Each dimension has statistics for read and write operations. Because reads and writes can occur as single-block or multiblock operations, they are separated into the four different operations shown on the slide. For each operation type, the number of corresponding requests and the number of megabytes are accumulated. In addition, the total I/O wait time in milliseconds and the total number of waits are also accumulated for both component and consumer-group statistics. For file statistics, the total service time in microseconds is accumulated, in addition to statistics for single-block reads. The virtual views show cumulative values for the statistics. Component and consumer-group statistics are transformed into AWR metrics that are sampled regularly and stored in the AWR repository. You can retrieve those metrics across a timeline directly on the Performance page of Enterprise Manager. Note: V$IOSTAT_NETWORK provides information about network I/O statistics caused by accessing files on a remote database instance.
Oracle Database 11g: New Features for Administrators 7 - 34
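The per-component dimension can be queried directly. A sketch against V$IOSTAT_FUNCTION (column names as documented for the released 11g views; verify against your release):

```sql
-- I/O requests, megabytes read, and wait time per RDBMS component,
-- with the busiest components first.
SELECT function_name,
       small_read_reqs + large_read_reqs             AS read_reqs,
       small_write_reqs + large_write_reqs           AS write_reqs,
       small_read_megabytes + large_read_megabytes   AS read_mb,
       number_of_waits,
       wait_time
FROM   v$iostat_function
ORDER BY wait_time DESC;
```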
I/O Statistics and Enterprise Manager You can retrieve I/O statistics directly on the Performance page in Enterprise Manager. On the Performance page, simply click the I/O subtab located underneath the Average Active Sessions graph. On the I/O subpage, you can see a breakdown of I/O statistics along three possible dimensions: I/O Function, I/O Type, and Consumer Group. Click one of the radio buttons to look at the corresponding dimension graphs. The slide shows the two graphs corresponding to the I/O Function dimension: I/O Megabytes per Second per RDBMS component and I/O Requests per Second per RDBMS component. Note: The Other RDBMS component category corresponds to everything that is not directly issued from SQL (for example, PL/SQL and Java).
Oracle Database 11g: New Features for Administrators 7 - 35
I/O Statistics and Enterprise Manager From one of the I/O statistic graphs, you can drill down to a specific component by clicking on that component. In the example shown on the slide, you drill down to the Buffer Cache Reads component. This takes you to the I/O Rates by I/O Function page where you can see the three important graphs for that particular component: MBPS, IOPS, and wait time.
Oracle Database 11g: New Features for Administrators 7 - 36
Resource Manager New EM Interface Using Enterprise Manager, you can access the Resource Manager section from the Server page. The Resource Manager section is organized in the order in which you should proceed to use Resource Manager. The first link in that section is called Getting Started. From the Getting Started With Database Resource Manager page, you can see a brief description of each step as well as links to the corresponding pages.
Oracle Database 11g: New Features for Administrators 7 - 37
Resource Manager Plans Created by Default When you create an Oracle Database 11g database, the resource manager plans shown above are created by default. However, none of them is active by default.
Oracle Database 11g: New Features for Administrators 7 - 38
Default Maintenance Resource Manager Plan

SQL> show parameter resource_manager_plan

NAME                   TYPE    VALUE
---------------------- ------- -------------------------------------------
resource_manager_plan  string  SCHEDULER[0x2843]:DEFAULT_MAINTENANCE_PLAN
Default Maintenance Resource Manager Plan When a maintenance window opens, the DEFAULT_MAINTENANCE_PLAN resource manager plan is automatically set to control the amount of CPU used by automated maintenance tasks. To give different priorities to each possible task during a maintenance window, various consumer groups are assigned to DEFAULT_MAINTENANCE_PLAN. The hierarchy between groups and plans is shown on the slide above. For high-priority tasks:
• The Optimizer Statistics Gathering automated task is assigned to the ORA$AUTOTASK_STATS_GROUP consumer group.
• The Segment Advisor automated task is assigned to the ORA$AUTOTASK_SPACE_GROUP consumer group.
• The Automatic SQL Tuning automated task is assigned to the ORA$AUTOTASK_SQL_GROUP consumer group.
Note: If need be, you can manually change the percentage of CPU resources allocated to the various automated maintenance task consumer groups inside ORA$AUTOTASK_HIGH_SUB_PLAN.
Oracle Database 11g: New Features for Administrators 7 - 39
Default Plan The slide above shows you how DEFAULT_PLAN is created. Note that there are no limits for its thresholds. As you can see, Oracle Database 11g introduces two new I/O limits that you can define as thresholds in a resource manager plan.
Oracle Database 11g: New Features for Administrators 7 - 40
I/O Resource Limit Thresholds When you create a resource manager plan directive, you can specify I/O resource limits. The example above shows you how to do this in both Enterprise Manager and PL/SQL. You can specify the following two arguments:
• switch_io_megabytes: Specifies the amount of I/O (in MB) that a session can issue before an action is taken. The default is NULL, which means unlimited.
• switch_io_reqs: Specifies the number of I/O requests that a session can issue before an action is taken. The default is NULL, which means unlimited.
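In PL/SQL, the two limits are arguments to CREATE_PLAN_DIRECTIVE. A hedged sketch, in which the plan, consumer group names, and limit values are illustrative:

```sql
-- Move sessions that exceed 1,000 I/O requests or 500 MB of I/O
-- to a lower-priority consumer group.
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan                => 'DAY_PLAN',      -- illustrative plan name
    group_or_subplan    => 'OLTP_GROUP',    -- illustrative group name
    comment             => 'Demote I/O-heavy sessions',
    switch_group        => 'LOW_GROUP',     -- illustrative target group
    switch_io_reqs      => 1000,
    switch_io_megabytes => 500);
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
```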
Oracle Database 11g: New Features for Administrators 7 - 41
You can also look at the Resource Manager Statistics page.
Oracle Database 11g: New Features for Administrators 7 - 42
Scheduler New Features
• Scheduling Streams propagation jobs using Oracle Scheduler
• Support for:
– Remote jobs
– Distributed jobs
– Lightweight jobs for performance (large numbers of jobs)
The main enhancements to the Scheduler in Oracle Database 11g are listed above. The scheduling of Streams propagation jobs is discussed in the Oracle Streams eStudy.
Oracle Database 11g: New Features for Administrators 7 - 43
Remote and Distributed Jobs Schedule
Remote jobs
• Operating system-level jobs
• Scripts, binaries, and so on
• No Oracle database required
• Agent starts and manages jobs
Distributed jobs
• Database jobs on other servers
Oracle Scheduler has added support for remote and multi-node jobs. It provides the ability to run a job on a host without a database. Additionally, users can provide a list of databases on which to execute a job. Creation and maintenance are done on a single database, but at run time exact replicas are executed on all the specified databases.
Oracle Database 11g: New Features for Administrators 7 - 44
The agent is a separately installable component, but it is included in every database. When the agent is installed as part of the database, no configuration is necessary. If the agent installed as part of the database is required to run jobs from another database, an additional step is necessary: registering with that database and starting the agent in the background. During a standalone installation, the agent should be registered with at least one database. It is possible to automate this registration if the user is willing to include the database registration password in the installer file; this allows for silent, automated installs. Optional information includes:
• The path to install the agent into
• Whether to automatically start the agent
• Whether to set up the agent to start automatically on every computer startup
If, after installation of the agent, another database is required to run jobs on the agent, the agent must be registered with that database.
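Registration and startup are performed with the agent's command-line utility. A hedged sketch, assuming the schagent utility shipped with the 11g Scheduler agent; the host name and port are illustrative:

```shell
# Register the agent with a database; prompts for the
# agent registration password set on that database.
schagent -registerdatabase dbhost.example.com 1234

# Start the agent in the background.
schagent -start

# Verify that it is running.
schagent -status
```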
Oracle Database 11g: New Features for Administrators 7 - 45
Running a remote JOB
1. Set the source for remote jobs for this database.
2. Set an agent registration password.
3. Install agents on two machines.
4. For every job to be run on the host, add it as a destination.
The following steps are necessary to run a remote external job.
1. Set the source for remote jobs for this database (the XML DB HTTP listener):
   dbms_scheduler.set_scheduler_attribute('source', 'server.example.com:1234')
   where 'server.example.com:1234' is the server name and port of the XML DB HTTP listener.
2. Set an agent registration password:
   dbms_scheduler.set_agent_registration_pass('password', max_uses => 2)
3. Install agents on two machines, with the agent-install database password set to "password".
4. For every job to be run on the host, add it as a destination:
   dbms_scheduler.add_job_destination('job1', 'host1', 'user1', 'password')
   dbms_scheduler.add_job_destination('job1', 'host2', 'user2', 'password')
5. Once the job has been enabled, there are two rows in the *_SCHEDULER_REMOTE_JOB_STATE views, one for each destination.
6. The job runs on both destinations every time it is scheduled to run. For every run, there are two entries in the *_SCHEDULER_JOB_LOG and *_SCHEDULER_JOB_RUN_DETAILS views.
7. Once the job has completed or has been disabled, there are no rows in the *_SCHEDULER_REMOTE_JOB_STATE views.
Oracle Database 11g: New Features for Administrators 7 - 46
A remote database job refers to a collection of jobs with the following properties: • The jobs are created on one database. • The jobs share their metadata with the exception of run-related attributes, thus making the jobs copies of each other. • Each copy of the job executes on a different database, independently of the others. For example, a job may execute successfully several times on one database while failing to execute at all on another database. (The success or failure of a job on one database does not affect any of the other copies). • All copies of the job can be altered or manipulated from the database where the job was originally created. The copies of the job that are running on the various databases are, in effect, independent jobs. However, they are still linked in the sense that they can be manipulated as a group. The original job created by the user is called the parent job. To distinguish the copies of the job from other copied/cloned jobs, the copies are called job instances. The database on which a remote database job is created is called the source database of the job. The databases on which the job is executed are called destination databases. If the job is configured to execute on the database on which it was created, then that database is both the source database for the job as well as one of its destinations.
Oracle Database 11g: New Features for Administrators 7 - 47
MAX_CONCURRENT_JOBS: This is the absolute maximum number of jobs allowed to run on the host simultaneously. It includes only jobs run through and controlled by the agent. The default value is 100. If jobs run through the agent are taking up too much CPU, memory, or I/O, the administrator can reduce this number.
MAX_CONCURRENT_JOBS_PER_USER: Allowable values are 1-1000. If multiple users run jobs on a remote host, this parameter limits the maximum number of jobs that a single user can run simultaneously. The default value is 100. Because this is the same as the default value for MAX_CONCURRENT_JOBS, it has no effect by default. If several users use a remote host and one is using more CPU, memory, or I/O than he should, this number can be reduced.
LIMITING_CPU_THRESHOLD: Allowable values are 10-100. This is the CPU threshold above which new jobs cannot run. The default value is 100, which effectively turns off limiting based on CPU usage. If a remote host should never be completely loaded, or should always have CPU reserved for another use, this value should be set to ensure this.
Oracle Database 11g: New Features for Administrators 7 - 48
Enhancements to Scheduler APIs
• New DBMS_SCHEDULER procedures, including:
– CREATE_CREDENTIAL
– DROP_CREDENTIAL
– SET_AGENT_REGISTRATION_PASS
CREATE_CREDENTIAL

CREATE_CREDENTIAL(
  credential_name IN VARCHAR2,
  user            IN VARCHAR2,
  password        IN VARCHAR2,
  domain          IN VARCHAR2 DEFAULT NULL,
  db_role         IN VARCHAR2 DEFAULT NULL,
  comments        IN VARCHAR2 DEFAULT NULL);
This is used to create a stored username/password pair called a credential. Credentials reside in a particular schema and can be created by any user with the CREATE JOB system privilege.

DROP_CREDENTIAL

DROP_CREDENTIAL(
  credential_name IN VARCHAR2,
  force           IN BOOLEAN DEFAULT FALSE);
This is used to drop a stored username/password pair called a credential. To drop a public credential, the SYS schema must be explicitly given. Only a user with the MANAGE SCHEDULER system privilege can drop a public credential. For a regular credential, only the owner of the credential or a user with the CREATE ANY JOB system privilege can drop the credential.

SET_AGENT_REGISTRATION_PASS

SET_AGENT_REGISTRATION_PASS(
  registration_password IN VARCHAR2,
  expiration_date       IN TIMESTAMP DEFAULT NULL,
  max_uses              IN NUMBER DEFAULT 1);
This is used to set the agent registration password for a database. Remote agents must register with the database before the database can submit jobs to the agent. To prevent abuse, this password can be given an expiration date and a maximum number of uses.

Oracle Database 11g: New Features for Administrators 7 - 49
Dictionary Views
• New Views: – *_SCHEDULER_CREDENTIALS – *_SCHEDULER_JOB_DESTINATIONS – *_SCHEDULER_PREFERRED_CREDS
• The *_SCHEDULER_JOBS views are modified to contain additional columns, listed in the notes below.
• The following dictionary views have been added:
– *_SCHEDULER_CREDENTIALS: Lists all regular credentials in the current user's schema.
– *_SCHEDULER_JOB_DESTINATIONS: Lists all destinations for all jobs in the current schema. If a job does not have any destinations specified, it runs only on the local database or host.
– *_SCHEDULER_PREFERRED_CREDS: Lists all preferred credentials for the current schema. If the target is NULL, the credential is valid for all targets for jobs in the current schema.
• *_SCHEDULER_JOBS: These views are modified to contain the following additional columns:
– number_of_destinations NUMBER NOT NULL: Number of destinations specified
– single_destination VARCHAR2(5) NOT NULL: Whether this should run on one of the destinations or all of them
– credential_name VARCHAR2(30): Name of the credential to be used for an external job
– credential_owner VARCHAR2(30): Owner of the credential to be used for an external job
– working_directory VARCHAR2(1000): Initial directory for an external job
– input VARCHAR2(4000): String to be provided as standard input to an external job
– environment_variables VARCHAR2(4000): Semicolon-separated list of name-value pairs to be set as environment variables for an external job

Oracle Database 11g: New Features for Administrators 7 - 50
Light Weight Jobs
• Persistent lightweight jobs – Little metadata – Recoverable
• Volatile lightweight jobs – No metadata – Non recoverable
A lightweight job is one that has the following characteristics:
• It is not based on database objects, unlike current Scheduler jobs.
• It has a significantly lower creation overhead than current Scheduler jobs.
• It has a significantly lower average session creation time than current Scheduler jobs.
• It generates less redo during runs than current Scheduler jobs, or no redo at all.
Oracle Database 11g allows the creation of two types of lightweight jobs:
• The first type has a small footprint on disk for lightweight job metadata and for storing run-time data. The footprint on disk also makes recovery possible and makes load balancing possible in RAC environments. These are called persistent lightweight jobs.
• The second type is called volatile lightweight jobs. These jobs may or may not have their metadata written to disk and certainly do not have state written to disk at run time. Recovery from crashes is not possible, and neither is load balancing across RAC instances.
In addition to the two types of lightweight jobs, the database continues to support the database object-based jobs that have existed since the Oracle Scheduler was first introduced in Oracle Database 10g. Lightweight jobs are not intended to replace those jobs; each type of job has its own advantages and provides you the flexibility to choose one or another based on your needs.
Oracle Database 11g: New Features for Administrators 7 - 51
Choosing the Right Job
• Regular job
– Highest overhead
– Best recovery
• Persistent lightweight job
– Less overhead
– Some recovery
• Volatile lightweight job
– Minimum overhead
– No recovery
The advantages and disadvantages of the three types of jobs are as follows:
• A regular job offers the maximum flexibility but entails a significant overhead in create/drop performance. It can be created with a single command, and the user has fine-grained control of the privileges on the job. The user can also use a program or a stored procedure owned by another user. The downside, as mentioned before, is slow create and drop times because of the overhead necessitated by database objects. If the user is creating a relatively small number of jobs that run relatively infrequently, regular jobs should be chosen.
• A persistent lightweight job offers a significant improvement in create and drop time because it does not have the overhead of creating a database object. Because persistent lightweight jobs write state to disk at run time, their run-time overhead is not likely to be much better than that of regular jobs, but there is a small improvement. There are several drawbacks to persistent lightweight jobs. First, the user cannot set privileges on these jobs; they inherit their privileges from the parent job template. Because the use of a template is mandatory, it is not possible to create a fully self-contained persistent lightweight job. If the user needs to create a large number of jobs in a very short time (10-100 jobs a second) and has a library of programs available, persistent lightweight jobs should be used.
• Volatile lightweight jobs write as little to disk as possible. Creates and drops may not be written to disk at all, and no state is written at run time. Thus the creation overhead is even smaller than that of persistent lightweight jobs, and there is minimal run-time overhead. On the downside, volatile lightweight jobs share all the drawbacks of persistent lightweight jobs. In addition, they cannot be recovered if the database crashes and cannot be load-balanced across RAC instances.
Volatile jobs should be used in those situations where the user is creating very large numbers of frequently executing jobs and wants to bring overhead (both CPU as well as redo) to an absolute minimum.
New PL/SQL APIs • New DBMS_SCHEDULER method – The CREATE_LIGHTWEIGHT_JOB method
• Methods that work on lightweight jobs:
– SET_ATTRIBUTE
– GET_ATTRIBUTE
– SET_JOB_ARGUMENT_VALUE
– RESET_JOB_ARGUMENT_VALUE
– STOP_JOB
– DROP_JOB
– RUN_JOB
Lightweight jobs are created, modified, and dropped using DBMS_SCHEDULER APIs. There are minimal changes to the APIs: other than for creating lightweight jobs, no new APIs have been added. The CREATE_LIGHTWEIGHT_JOB call has been added to create lightweight jobs, and most calls that manipulate regular jobs also work on lightweight jobs. The methods SET_ATTRIBUTE, GET_ATTRIBUTE, SET_JOB_ARGUMENT_VALUE, RESET_JOB_ARGUMENT_VALUE, STOP_JOB, DROP_JOB, and RUN_JOB work with lightweight jobs. The job-related DBMS_SCHEDULER calls that do not apply to lightweight jobs are CREATE_JOB (which is used only to create regular jobs) and SET_JOB_ANYDATA_VALUE (because lightweight jobs cannot take ANYDATA arguments).
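As an illustration, a persistent lightweight job might be created from an existing program as follows. This is a sketch only: the program name MY_PROG is hypothetical, and the exact parameter list of CREATE_LIGHTWEIGHT_JOB is assumed from the description above rather than taken from the documentation.

```sql
SQL> BEGIN
  2    -- The job template (program) must already exist and be enabled;
  3    -- a lightweight job cannot be fully self-contained.
  4    dbms_scheduler.create_lightweight_job(
  5      job_name        => 'my_lw_job',
  6      program_name    => 'my_prog',   -- hypothetical program
  7      repeat_interval => 'FREQ=SECONDLY;INTERVAL=10',
  8      enabled         => TRUE);
  9  END;
 10  /
```

The job can then be stopped and dropped with the same STOP_JOB and DROP_JOB calls used for regular jobs.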
Oracle Database 11g: New Features for Administrators 7 - 53
Viewing Lightweight Jobs in the Dictionary
• No new views are added
• Lightweight jobs are visible through the *_SCHEDULER_JOBS views
• Arguments to lightweight jobs are visible through the *_SCHEDULER_JOB_ARGS views
• Lightweight jobs are not visible through the *_OBJECTS views
The changes to dictionary views are as follows: • No new views are added. • Lightweight jobs are visible through the same views as regular jobs are – DBA_SCHEDULER_JOBS, ALL_SCHEDULER_JOBS and USER_SCHEDULER_JOBS. • Arguments to lightweight jobs are visible through the same views as those of regular jobs – DBA_SCHEDULER_JOB_ARGS, ALL_SCHEDULER_JOB_ARGS and USER_SCHEDULER_JOB_ARGS. • Since lightweight jobs are not database objects, they are not visible through the DBA_OBJECTS, ALL_OBJECTS and USER_OBJECTS views.
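You can verify this behavior directly from SQL*Plus: a lightweight job appears in the first query below but not in the second, whereas a regular job appears in both (a sketch; run as the job owner):

```sql
SQL> -- All Scheduler jobs owned by the current user, lightweight or regular
SQL> SELECT job_name FROM user_scheduler_jobs;

SQL> -- Only regular jobs appear here, because only they are database objects
SQL> SELECT object_name FROM user_objects WHERE object_type = 'JOB';
```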
Summary
In this lesson, you should have learned how to:
• Create and manage persistent and volatile lightweight jobs with the Scheduler
• View lightweight jobs in the dictionary views
Advisor Named Findings
• Advisor results are now classified and named – Exist in DBA{USER}_ADVISOR_FINDINGS view
• You can query all finding names from the DBA_ADVISOR_FINDING_NAMES view:

SQL> SELECT finding_name FROM dba_advisor_finding_names;

FINDING_NAME
----------------------------------------
Top Segments by I/O
Top SQL by "Cluster" Wait
. . .
Undersized Redo Log Buffer
Undersized SGA
Undersized Shared Pool
Undersized Streams Pool
Oracle Database 10g introduced the advisor framework and various advisors to help DBAs manage databases efficiently. These advisors provide feedback in the form of findings. Oracle Database 11g now classifies these findings, so that you can query the Advisor views to understand how often a given type of finding recurs in the database. A FINDING_NAME column has been added to the following Advisor views: • DBA_ADVISOR_FINDINGS • USER_ADVISOR_FINDINGS A new DBA_ADVISOR_FINDING_NAMES view displays all the finding names.
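Because findings are now named, you can count how often each type recurs directly from the view described above, for example:

```sql
SQL> SELECT finding_name, COUNT(*) AS occurrences
  2  FROM   dba_advisor_findings
  3  GROUP  BY finding_name
  4  ORDER  BY COUNT(*) DESC;
```

Findings that appear near the top of this list are the recurring problem areas worth investigating first.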
Self-Diagnostic Engine in the Database
• Integrates all components together
• Automatically provides database-wide performance diagnostics, including RAC
• Real-time results using the Time Model
• Provides impact and benefit analysis, and identifies non-problem areas
• Provides information rather than raw data
• Runs proactively out of the box, and reactively when required
Top-Down Analysis Using AWR Snapshots
• Throughput centric: focuses on reducing 'DB time'
• Classification tree based on decades of Oracle performance tuning expertise
• Real-time results: no need to wait hours to see the results
• Pinpoints the root cause: distinguishes symptoms from the root cause
• Reports non-problem areas (for example, that I/O is not a problem)
Oracle Database 11g: Automatic Database Diagnostic Monitor for RAC
Oracle Database 11g extends this functionality and increases the database's manageability by offering cluster-wide performance analysis. A special mode of the Automatic Database Diagnostic Monitor (ADDM) analyzes an Oracle Real Application Clusters (RAC) database and reports on issues that affect the entire cluster as well as those that affect individual instances. This mode is called Database ADDM, as opposed to the Instance ADDM that already exists in Oracle Database 10g. Database ADDM for RAC is not just a report of reports; it performs independent analysis appropriate for RAC.
Automatic Database Diagnostic Monitor for RAC
• Identifies the most critical performance problems for the entire RAC cluster database
• Runs automatically when taking AWR snapshots (the default)
• Performs database-wide analysis of:
– Global resources, for example, I/O and global locks
– High-load SQL and hot blocks
– Global cache interconnect traffic
– Network latency issues
– Skew in instance response times
Automatic Database Diagnostic Monitor for RAC
In Oracle Database 11g, you can run ADDM in an analysis mode that examines the throughput performance of an entire cluster; when the advisor runs in this mode, it is called Database ADDM. You can also run the advisor for a single instance, which is equivalent to the Oracle Database 10g ADDM and is now called Instance ADDM. Database ADDM has access to the AWR data generated by all instances, making the analysis of global resources more accurate. Both Database and Instance ADDM run on continuous time periods that can contain instance startups and shutdowns; in the case of Database ADDM, several instances may be shut down or started during the analysis period. However, you must maintain the same database version throughout the entire time period. Database ADDM runs automatically after each snapshot is taken; the automatic Instance ADDM runs are the same as in Oracle Database 10g. You can also perform analysis on a subset of instances in the cluster; this is called partial analysis ADDM. An I/O capacity finding (the I/O system is overused) is a global finding because it concerns a global resource affecting multiple instances. A local finding concerns a local resource or issue that affects a single instance; for example, a CPU-bound instance results in a local finding about CPU. Although ADDM can be used during application development to test changes to the application, the database system, or the hosting machines, Database ADDM is targeted at DBAs.
Automatic Database Diagnostic Monitor for RAC
• Specified in the DBMS_ADVISOR.SET_DEFAULT_TASK_PARAMETER procedure:

Value of INSTANCE           Value of INSTANCES                ADDM Analysis Mode
--------------------------  --------------------------------  ------------------------------------------
'0' or 'UNUSED' (default)   'UNUSED' (default)                Database ADDM (all instances)
'0' or 'UNUSED' (default)   Comma-separated list of           Partial analysis ADDM. Only instances
                            instance numbers (1,2,5, ...)     specified in the INSTANCES parameter
                                                              are analyzed.
A positive integer          Any value                         Instance ADDM. The instance specified
(e.g. '1')                                                    in the INSTANCE parameter is analyzed.
Automatic Database Diagnostic Monitor for RAC
The distinction between Database and Instance ADDM is based on the value of the advisor parameter INSTANCE. When the value is 'ALL', the task is a Database ADDM task; when the value is numeric, it is the instance ID for an Instance ADDM task. The results of an ADDM analysis are stored in the advisor framework and accessed like any ADDM task in Oracle Database 10g. You choose whether to run Database ADDM, Instance ADDM, or partial analysis by setting the INSTANCE and INSTANCES parameters with the DBMS_ADVISOR.SET_DEFAULT_TASK_PARAMETER procedure.
Note: Partial ADDM is not currently exposed through EM, but command-line PL/SQL APIs exist to do partial analysis. Use DBMS_ADDM instead!
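Following the table above, defaulting automatic runs to a partial analysis of instances 1 and 3 could be sketched as follows (the advisor name 'ADDM' is an assumption; the parameter names come from the table above):

```sql
SQL> BEGIN
  2    -- Leave INSTANCE unset so the numeric single-instance mode is not chosen
  3    dbms_advisor.set_default_task_parameter('ADDM', 'INSTANCE',  'UNUSED');
  4    -- Restrict analysis to instances 1 and 3 (partial analysis ADDM)
  5    dbms_advisor.set_default_task_parameter('ADDM', 'INSTANCES', '1,3');
  6  END;
  7  /
```

As the note above states, the DBMS_ADDM package is the recommended interface for this.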
EM Support for ADDM for RAC
Oracle Database 11g EM displays the ADDM analysis on the Cluster Database Home page. The Findings table is displayed in the Performance Analysis section. For each finding, the Affected Instances column displays the number (n of n) of instances affected; the display also indicates the percentage impact for each instance. Drilling down on a finding takes you to the ADDM Findings Detail page.
EM Support For ADDM for RAC • Finding History Page:
EM Support for ADDM for RAC
The ADDM Finding Details page allows you to see the Finding History. When you click this button, you see a page with a chart at the top plotting the impact, in active sessions, of the finding over time. The default display period is 24 hours; the drop-down list also supports viewing seven days. At the bottom of the display, a table similar to the results section shows all findings for this named finding. From this page, you can set filters on the finding results. Different types of findings (for example, CPU, Logins, SQL) have different kinds of criteria for filtering.
Note: Only automatic runs of ADDM are considered for the Finding History, and these results reflect the unfiltered results only.
Using the DBMS_ADDM Package
• A database ADDM task is created and executed:

SQL> var tname varchar2(60);
SQL> BEGIN
  2    :tname := 'my database ADDM task';
  3    dbms_addm.analyze_db(:tname, 1, 2);
  4  END;
  5  /
• Use GET_REPORT procedure to see the result: SQL> SELECT dbms_addm.get_report(:tname) FROM DUAL;
Using the DBMS_ADDM Package
The DBMS_ADDM package eases ADDM management. It consists of the following procedures and functions:
• ANALYZE_DB: Creates an ADDM task for analyzing the database globally.
• ANALYZE_INST: Creates an ADDM task for analyzing a local instance.
• ANALYZE_PARTIAL: Creates an ADDM task for analyzing a subset of instances.
• DELETE: Deletes a created ADDM task (of any kind).
• GET_REPORT: Gets the default text report of an executed ADDM task.
Note: In the example above, the parameters 1 and 2 are the beginning and ending snapshot IDs of the analysis period.
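In line with the procedures listed above, a partial analysis for a subset of instances might look like this. This is a sketch: the exact ANALYZE_PARTIAL parameter order (instance list as a comma-separated string, then begin and end snapshot IDs) is an assumption based on the ANALYZE_DB example above.

```sql
SQL> var tname varchar2(60);
SQL> BEGIN
  2    :tname := 'my partial ADDM task';
  3    -- Analyze only instances 1 and 2 between snapshots 1 and 2
  4    dbms_addm.analyze_partial(:tname, '1,2', 1, 2);
  5  END;
  6  /
SQL> SELECT dbms_addm.get_report(:tname) FROM dual;
```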
Advisor Named Findings and Directives
Using the DBMS_ADDM Package
• Create an ADDM directive that filters Undersized SGA findings:

SQL> var tname varchar2(60);
SQL> BEGIN
  2    dbms_addm.insert_finding_directive(
  3      NULL,
  4      'My undersized SGA directive',
  5      'Undersized SGA',
  6      2,
  7      10);
  8    :tname := 'my instance ADDM task';
  9    dbms_addm.analyze_inst(:tname, 1, 2);
 10  END;
 11  /
SQL> SELECT dbms_addm.get_report(:tname) FROM dual;
• Possible finding names are listed in DBA_ADVISOR_FINDING_NAMES
Using the DBMS_ADDM Package
You can use the possible finding names to query the findings repository and get all occurrences of a specific finding. Above you see the creation of an Instance ADDM task with a finding directive. When the task name is NULL, the directive applies to all subsequent ADDM tasks. The finding name ("Undersized SGA") must exist in the DBA_ADVISOR_FINDING_NAMES view (which lists all the finding names) and is case-sensitive. The result of DBMS_ADDM.GET_REPORT shows an "Undersized SGA" finding only if the finding is responsible for at least 2 (min_active_sessions) average active sessions during the analysis period, and this constitutes at least 10% (min_perc_impact) of the total database time during that period. Further PL/SQL directive procedures are:
• INSERT_FINDING_DIRECTIVE: Creates a directive to limit reporting of a specific finding type.
• INSERT_SQL_DIRECTIVE: Creates a directive to limit reporting of actions on specific SQL.
• INSERT_SEGMENT_DIRECTIVE: Creates a directive to prevent ADDM from creating actions to "run Segment Advisor" for specific segments.
• INSERT_PARAMETER_DIRECTIVE: Creates a directive to prevent ADDM from creating actions to alter the value of a specific system parameter.
Using the DBMS_ADDM Package
• The following lists the procedures to add directives:
– INSERT_FINDING_DIRECTIVE
– INSERT_SQL_DIRECTIVE
– INSERT_SEGMENT_DIRECTIVE
– INSERT_PARAMETER_DIRECTIVE
Using the DBMS_ADDM Package
Note: For a complete description of the available procedures, see the Oracle Database PL/SQL Packages and Types Reference.
Modified Advisor Views

New column   Description
-----------  -----------------------------------------------------------------
FILTERED     'Y' means that the row in the view was filtered out by a
             directive (or a combination of directives). 'N' means that the
             row was not filtered.
Modified Advisor Views The views containing advisor findings, recommendations and actions have been enhanced by adding the above column.
New ADDM Views
• DBA{USER}_ADDM_TASKS: Displays every executed ADDM task; these views are extensions of the corresponding Advisor views.
• DBA{USER}_ADDM_INSTANCES: Displays instance-level information for completed ADDM tasks.
• DBA{USER}_ADDM_FINDINGS: Extensions of the corresponding Advisor views.
• DBA{USER}_ADDM_FDG_BREAKDOWN: Displays the contribution of each instance to each finding for database and partial ADDM.
• With ASMM, five important SGA components can be automatically tuned
• Special buffer pools are not auto-tuned
• The log buffer is a static component, but has a good default
Oracle Database 10g SGA Parameters
As shown on the slide, the five most important pools are automatically tuned when Automatic Shared Memory Management (ASMM) is activated; these parameters are called auto-tuned parameters. The second category, manual dynamic parameters, consists of parameters that can be manually resized without having to shut down the instance but that are not automatically tuned by the system. The last category represents the parameters that are fixed in size and cannot be resized without first shutting down the instance.
Oracle Database 10g PGA Parameters: Automatic SQL Execution Memory Management
• PGA_AGGREGATE_TARGET:
– Specifies the target aggregate amount of PGA memory available to the instance
– Can be dynamically modified at the instance level
– Examples: 100000K, 2500M, 50G
– Default value is the greater of 10 MB and 20% of the SGA size
• WORKAREA_SIZE_POLICY: – Optional – Can be dynamically modified at the instance or session level – Allows you to fallback to static SQL memory management for a particular session
Oracle Database 10g PGA Sizing Parameters
PGA_AGGREGATE_TARGET specifies the target aggregate PGA memory available to all server processes attached to the instance. Setting PGA_AGGREGATE_TARGET to a nonzero value automatically sets the WORKAREA_SIZE_POLICY parameter to AUTO, which means that SQL working areas used by memory-intensive SQL operators are automatically sized. A nonzero value for this parameter is the default because, unless you specify otherwise, Oracle sets it to the greater of 20% of the SGA size and 10 MB. Setting PGA_AGGREGATE_TARGET to 0 automatically sets the WORKAREA_SIZE_POLICY parameter to MANUAL, which means that SQL work areas are sized using the *_AREA_SIZE parameters. Keep in mind that PGA_AGGREGATE_TARGET is not set in stone: it is used to help the system better manage PGA memory, but the system will exceed this setting if necessary. WORKAREA_SIZE_POLICY can be altered per database session, allowing manual memory management on a per-session basis if needed. For example, if a session is loading a large import file and a rather large SORT_AREA_SIZE is needed, a logon trigger could be used to set WORKAREA_SIZE_POLICY for the account doing the import. If WORKAREA_SIZE_POLICY is AUTO and PGA_AGGREGATE_TARGET is set to 0, error ORA-04032 is raised at startup.
Note: Until Oracle9i Release 2, PGA_AGGREGATE_TARGET controlled the sizing of work areas for all dedicated server connections, but it had no effect on shared server connections, where the *_AREA_SIZE parameters took precedence. In Oracle Database 10g, PGA_AGGREGATE_TARGET controls work areas allocated by both dedicated and shared connections.
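The logon-trigger approach mentioned above might be sketched like this. The IMPORT_USER account and the 100 MB sort area size are hypothetical values for illustration only:

```sql
SQL> CREATE OR REPLACE TRIGGER set_manual_workarea
  2  AFTER LOGON ON DATABASE
  3  BEGIN
  4    -- Switch only the import account to manual work-area sizing
  5    IF USER = 'IMPORT_USER' THEN
  6      EXECUTE IMMEDIATE
  7        'ALTER SESSION SET workarea_size_policy = MANUAL';
  8      EXECUTE IMMEDIATE
  9        'ALTER SESSION SET sort_area_size = 104857600';  -- 100 MB
 10    END IF;
 11  END;
 12  /
```

All other sessions continue to use automatic PGA memory management under PGA_AGGREGATE_TARGET.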
Oracle Database 10g Memory Advisors • Buffer Cache Advice (introduced in 9iR1): – V$DB_CACHE_ADVICE – Predicts physical read times for different cache sizes
• Shared Pool Advice (in 9iR2): – V$SHARED_POOL_ADVICE – Predicts parse times for different sizes of shared pool
• Java Pool Advice (introduced in 9iR2):
– V$JAVA_POOL_ADVICE
– Predicts Java class load time for different Java pool sizes
• Streams Pool Advice (introduced in 10gR2):
– V$STREAMS_POOL_ADVICE
– Predicts spill and unspill activity time for various pool sizes
Oracle Database 10g Memory Advisors In order to help you size the most important SGA components, a number of advisories have been introduced in the Oracle database. They are listed on the slide: • V$DB_CACHE_ADVICE contains rows that predict the number of physical reads and time for the cache size corresponding to each row. • V$SHARED_POOL_ADVICE displays information about estimated parse time in the shared pool for different pool sizes. • V$JAVA_POOL_ADVICE displays information about estimated class load time into the Java pool for different pool sizes. • V$STREAMS_POOL_ADVICE displays information about the estimated count of spilled or unspilled messages and the associated time spent in the spill or unspill activity for different Streams pool sizes. Note: For more information about these views, refer to the Oracle Database Reference guide.
Oracle Database 10g Memory Advisors • SGA Target Advice (introduced in 10gR2): – V$SGA_TARGET_ADVICE view – Estimates the DB time for different SGA sizes based on current size
• PGA Target Advice (introduced in 9iR1): – V$PGA_TARGET_ADVICE view – Predicts the PGA cache hit ratio for different PGA sizes – Time column EST_TIME added in 11gR1
• For all advisors, STATISTICS_LEVEL must be set to at least TYPICAL
Oracle Database 10g Memory Advisors
• In Oracle Database 10g, the SGA Advisor shows the improvement in DB time that can be achieved for a particular setting of the total SGA size. This advisor allows you to reduce trial and error in setting the SGA size. The advisor data is stored in the V$SGA_TARGET_ADVICE view.
• V$PGA_TARGET_ADVICE predicts how the PGA cache hit percentage displayed in the V$PGASTAT performance view would be impacted if the value of the PGA_AGGREGATE_TARGET parameter were changed. The prediction is performed for various values of PGA_AGGREGATE_TARGET selected around its current value, and the advice statistic is generated by simulating the past workload run by the instance. In 11g, a new column, EST_TIME, has been added; it corresponds to the CPU and I/O time needed to process the bytes.
Automatic Memory Management Overview
With Automatic Memory Management, the system causes an indirect transfer of memory from the SGA to the PGA and vice versa, automating the sizing of the PGA and SGA according to your workload. This indirect memory transfer relies on the operating system mechanism for freeing shared memory: once memory is released to the OS, other components can allocate it by requesting memory from the OS. Currently, this is implemented on Linux, Solaris, HP-UX, AIX, and Windows. Basically, you set a memory target for the database instance, and the system then tunes to the target memory size, redistributing memory as needed between the system global area (SGA) and the aggregate program global area (PGA). The slide shows the differences between the Oracle Database 10g mechanism and the new Automatic Memory Management in Oracle Database 11g.
Oracle Database 11g Memory Sizing Parameters
The above graphic shows the memory initialization parameter hierarchy. Although you only have to set MEMORY_TARGET to trigger Automatic Memory Management, you can still set lower-bound values for the various caches. If the child parameters are set by the user, they become the minimum values below which that component will not be auto-tuned.
Automatic Memory Management Overview The simplest way to manage memory is to allow the database to automatically manage and tune it for you. To do so (on most platforms), you set only a target memory size initialization parameter (MEMORY_TARGET) and a maximum memory size initialization parameter (MEMORY_MAX_TARGET). Because the target memory initialization parameter is dynamic, you can change the target memory size at any time without restarting the database. The maximum memory size serves as an upper limit so that you cannot accidentally set the target memory size too high. Because certain SGA components either cannot easily shrink or must remain at a minimum size, the database also prevents you from setting the target memory size too low.
Auto Memory Parameter Dependency

[Flowchart: decision tree over MEMORY_TARGET (MT), MEMORY_MAX_TARGET (MMT), SGA_TARGET (ST), and PGA_AGGREGATE_TARGET (PAT). It shows when both SGA and PGA can grow and shrink automatically, when only the PGA is auto-tuned, when SGA and PGA are separately auto-tuned, and when neither can grow or shrink automatically. With MT > 0 and neither ST nor PAT set, the split defaults to ST = 60% of MT and PAT = 40% of MT; with ST and PAT set, ST + PAT <= MT <= MMT and the user-set values act as minimums. MT can be dynamically changed later.]
Auto Memory Parameter Dependency
The above flowchart describes the relationships between the various memory sizing parameters. If MEMORY_TARGET is set to a nonzero value:
• If SGA_TARGET and PGA_AGGREGATE_TARGET are set, they are considered the minimum values for the sizes of the SGA and PGA respectively. MEMORY_TARGET can take values from SGA_TARGET + PGA_AGGREGATE_TARGET up to MEMORY_MAX_TARGET.
• If SGA_TARGET is set and PGA_AGGREGATE_TARGET is not set, both parameters are still auto-tuned; PGA_AGGREGATE_TARGET is initialized to MEMORY_TARGET - SGA_TARGET.
• If PGA_AGGREGATE_TARGET is set and SGA_TARGET is not set, both parameters are still auto-tuned; SGA_TARGET is initialized to min(MEMORY_TARGET - PGA_AGGREGATE_TARGET, SGA_MAX_SIZE (if set by the user)), and its subcomponents are auto-tuned.
• If neither is set, they are auto-tuned without any minimum or default values, and the total server memory is distributed in a fixed ratio during initialization: 60% to the SGA and 40% to the PGA at startup.
Automatic Memory Parameter Dependency (continued)
If MEMORY_TARGET is not set, or is explicitly set to 0 (the default value in 11g):
• If SGA_TARGET is set, only the sizes of the subcomponents of the SGA are auto-tuned. The PGA is auto-tuned whether or not PGA_AGGREGATE_TARGET is explicitly set. However, the SGA as a whole (SGA_TARGET) and the PGA as a whole (PGA_AGGREGATE_TARGET) are not auto-tuned; that is, they do not grow or shrink automatically.
• If neither SGA_TARGET nor PGA_AGGREGATE_TARGET is set, the existing policy applies: the PGA is auto-tuned, the SGA is not, and the parameters for some of the SGA subcomponents have to be set explicitly (in place of SGA_TARGET).
• If only MEMORY_MAX_TARGET is set, MEMORY_TARGET defaults to 0 in a manual setup using a text initialization file, and auto-tuning defaults to the 10gR2 behavior for the SGA and PGA.
• If SGA_MAX_SIZE is not set by the user, it is internally set to MEMORY_MAX_TARGET whenever MEMORY_MAX_TARGET is set by the user (independent of whether SGA_TARGET is set).
In a text initialization parameter file, if you omit the line for MEMORY_MAX_TARGET and include a value for MEMORY_TARGET, the database automatically sets MEMORY_MAX_TARGET to the value of MEMORY_TARGET. If you omit the line for MEMORY_TARGET and include a value for MEMORY_MAX_TARGET, the MEMORY_TARGET parameter defaults to zero. After startup, you can then dynamically change MEMORY_TARGET to a nonzero value, provided that it does not exceed the value of MEMORY_MAX_TARGET.
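The parameter relationships described above can be sketched with a minimal command-line setup (the sizes shown are illustrative values only, not recommendations):

```sql
SQL> -- Set the hard upper limit and the initial target in the SPFILE
SQL> ALTER SYSTEM SET memory_max_target = 1600M SCOPE=SPFILE;
SQL> ALTER SYSTEM SET memory_target     = 1200M SCOPE=SPFILE;
SQL> -- Leave SGA_TARGET and PGA_AGGREGATE_TARGET at 0 so that
SQL> -- no user-set minimums constrain the auto-tuning
SQL> ALTER SYSTEM SET sga_target = 0 SCOPE=SPFILE;
SQL> ALTER SYSTEM SET pga_aggregate_target = 0 SCOPE=SPFILE;

SQL> -- MEMORY_MAX_TARGET is static: restart the instance
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP

SQL> -- MEMORY_TARGET is dynamic, up to MEMORY_MAX_TARGET
SQL> ALTER SYSTEM SET memory_target = 1400M;
```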
Enabling Automatic Memory Management Note: The above terminology is being revamped (given that it is BETA!). ‘Current Total Memory Size’ should read ‘Current Total Memory Size for Auto-tuning’. You can enable Automatic Memory Management using Enterprise Manager as shown above. From the Database Home page, click the Server tab. On the Server page, click the Memory Parameters link in the Database Configuration section. This takes you to the Memory Parameters page. On this page, you can click the Enable button to enable Automatic Memory Management. The value in the ‘Total Memory Size for Automatic Memory Tuning’ text box is set by default to current SGA+PGA size. You can set it to anything more than this but less than the value in ‘Maximum Memory Size’ box. Note: On the Memory Parameters page, you also have the possibility to specify the Maximum Memory Size. If you change this field, the database is automatically restarted for your change to take effect.
Monitor Automatic Memory Management
Once Automatic Memory Management is enabled, you can see a new graphical representation of the history of your memory component sizes in the Allocation History section of the Memory Parameters page. In the first histogram, the green part is tunable PGA only and the brownish-orange part is all of the SGA. In the lower histogram, the dark blue part is the Shared Pool size and the light blue part corresponds to the Buffer Cache. The change above shows a possible repartitioning of your memory after the execution of a PL/SQL program that consumes untunable PGA: both the SGA and the PGA might shrink to accommodate the untunable portion consuming the extra memory. Note that when the SGA shrinks, its subcomponents also shrink around the same time. On this page, you can also access the memory target advisor by clicking the Advice button; this advisor shows the possible DB time improvement for various total memory sizes.
Note: You can also look at the memory target advice using the V$MEMORY_TARGET_ADVICE view.
Monitor Automatic Memory Management
If you wish to monitor the decisions made by Automatic Memory Management via command line: • V$MEMORY_DYNAMIC_COMPONENTS has the current status of all memory components • V$MEMORY_RESIZE_OPS has a circular history buffer of the last 800 SGA resize requests • All SGA and PGA equivalents still in place for backward compatibility
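For example, the current memory distribution can be checked as follows (a sketch; the size columns mirror those of the SGA equivalents):

```sql
SQL> -- Current sizes of all auto-tuned memory components, in MB
SQL> SELECT component,
  2         current_size / 1024 / 1024 AS current_mb
  3  FROM   v$memory_dynamic_components
  4  WHERE  current_size > 0
  5  ORDER  BY current_size DESC;
```

A similar query against V$MEMORY_RESIZE_OPS shows the recent grow and shrink operations that Automatic Memory Management has performed.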
DBCA and Automatic Memory Management
The Oracle Database 11g Release 1 DBCA has new options to accommodate Automatic Memory Management. Use the Memory tab of the Initialization Parameters screen to set the initialization parameters that control how the database manages its memory usage. You can choose from two basic approaches to memory management:
• Typical, which requires very little configuration and allows the database to manage how it uses a percentage of your overall system memory. Select Typical to create a database with minimal configuration or user input. This option is sufficient for most environments and for DBAs who are inexperienced with advanced database creation procedures. Enter a value in the Percentage field; this value represents the percentage of your total available system memory (shown in parentheses) that is allocated to the Oracle database. Based on this value, DBCA allocates the most efficient amount of memory to the database memory structures. Click Show Memory Distribution to see how much memory DBCA will assign to both the SGA and the PGA. Note that the memory allocation also includes another 40 MB, which is required by the operating system to run the database executable.
• Custom, which requires more configuration but provides you with more control over how the database uses the available system memory. To allocate specific amounts of memory to the SGA and PGA, select Automatic. To customize how the SGA memory is distributed among the SGA memory structures (buffer cache, shared pool, and so on), select Manual and enter specific values for each SGA subcomponent. You can review and modify these initialization parameters later in DBCA.
Note: When using DBUA or manual database creation, the MEMORY_TARGET parameter defaults to 0.
Summary
• Unifies system (SGA) and process (PGA) memory management • Single dynamic parameter for all database memory • Automatically adapts to workload changes • Maximizes memory utilization • Helps eliminate out-of-memory errors
Statistic Preferences Overview
The automated statistics-gathering feature was introduced in Oracle Database 10g Release 1 to reduce the burden of maintaining optimizer statistics. However, there were cases where you had to disable it and run your own scripts instead. One reason was the lack of object-level control: whenever you found a small subset of objects for which the default gathering options did not work well, you had to lock their statistics and analyze them separately using your own options. For example, the feature that automatically tries to determine an adequate sample size (ESTIMATE_PERCENT=AUTO_SAMPLE_SIZE) does not work well against columns that contain data with very high frequency skews; the only way around this was to manually specify the sample size in your own script. The Statistic Preferences feature in Oracle Database 11g introduces flexibility so that you can rely more on the automated statistics-gathering feature even when some objects require settings that differ from the database default. This feature allows you to associate statistics-gathering options that override the default behavior of the GATHER_*_STATS procedures and the automated Optimizer Statistics Gathering task at the object or schema level. As a DBA, you can use the DBMS_STATS package to manage the gathering options shown above: you can set, get, delete, export, and import those preferences at the table, schema, database, and global level. Global preferences are used for tables that do not have their own preferences, whereas database preferences set preferences on all tables. The preference values specified in the various ways take precedence from the outer circles to the inner ones, as shown on the slide.
The last three highlighted options are new in Oracle Database 11g Release 1:
• PUBLISH is used to decide whether to publish the statistics to the dictionary or to store them in a private area first.
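A per-object preference such as the manual sample size mentioned above can be set with DBMS_STATS.SET_TABLE_PREFS; for instance (the SH.SALES table and the 5% sample are used only as an illustration):

```sql
SQL> BEGIN
  2    -- Override ESTIMATE_PERCENT for this one table; all other tables
  3    -- continue to use the schema, database, or global preference
  4    dbms_stats.set_table_prefs(
  5      ownname => 'SH',
  6      tabname => 'SALES',
  7      pname   => 'ESTIMATE_PERCENT',
  8      pvalue  => '5');
  9  END;
 10  /
```

Subsequent GATHER_*_STATS calls and the automated statistics task then honor this preference for SH.SALES.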
Setting Global Preferences With Enterprise Manager
Setting Global Preferences With Enterprise Manager It is possible to control global preference settings using Enterprise Manager. You can do so from the Statistics Options page. You can access this page from the Database Home page by clicking the Server tab, then the Manage Optimizer Statistics link, and then the Statistics Options link. Once on the Statistics Options page, you can change the global preferences from the Gather Optimizer Statistics Default Options section. Once done, click the Apply button.
Oracle Database 11g: New Features for Administrators 8 - 34
Partitioned Tables and Incremental Statistics Overview
GRANULARITY=GLOBAL & INCREMENTAL=FALSE
Partitioned Tables and Incremental Statistics Overview
For a partitioned table, the system maintains both the statistics on each partition and the overall statistics for the table. Generally, if the table is partitioned on a range, very few partitions go through data modifications (DML). For example, suppose we have a table that stores sales transactions, partitioned on sales date with each partition containing the transactions for a quarter. Most of the DML activity happens on the partition that stores the transactions of the current quarter; the data in the other partitions remains unchanged.

The system keeps track of DML monitoring information at the table and (sub)partition level. Statistics are gathered only for those partitions (in the above example, the partition for the current quarter) that have changed significantly (the current threshold is 10%) since the last statistics gathering. However, global statistics are gathered by scanning the entire table, which makes them very expensive on partitioned tables, especially when some partitions are stored on slow devices and are not modified often.

Oracle Database 11g can expedite the gathering of certain global statistics, such as the number of distinct values. In contrast to the traditional way of scanning the entire table, there is a new mechanism that maintains these global statistics by scanning only those partitions that have been changed, while reusing the statistics gathered previously for the partitions that are unchanged. In short, these global statistics can be maintained incrementally.

The DBMS_STATS package allows you to specify the granularity on a partitioned table; for example, you can specify auto, global, global and partition, all, partition, and subpartition. If the granularity specified includes GLOBAL and the table is marked as INCREMENTAL in its gathering options, the global statistics are gathered using the incremental mechanism.
Moreover, statistics for changed partitions are gathered as well, whether or not you specified PARTITION in the granularity.
Note: The new mechanism does not incrementally maintain histograms and density global statistics.
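The mechanism described above can be sketched as follows; the SH.SALES table is illustrative:

```sql
-- Mark the partitioned table for incremental maintenance of global statistics
exec dbms_stats.set_table_prefs('SH','SALES','INCREMENTAL','TRUE');

-- Verify the setting
select dbms_stats.get_prefs('INCREMENTAL','SH','SALES') from dual;

-- With a granularity that includes GLOBAL, global statistics are now
-- maintained by scanning only the changed partitions
exec dbms_stats.gather_table_stats('SH','SALES', granularity=>'GLOBAL');
```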
Hash-based Sampling for Column Statistics
• Computing column statistics is the most expensive step in statistics gathering
• The row-sampling technique gives inaccurate results with skewed data distributions
• A new approximate counting technique is used when ESTIMATE_PERCENT is set to AUTO_SAMPLE_SIZE
– You are encouraged to use AUTO_SAMPLE_SIZE
Hash-based Sampling for Column Statistics
For query optimization, it is essential to have a good estimate of the number of distinct values. By default, and without histograms, the optimizer uses the number of distinct values to evaluate the selectivity of a predicate on a column. The algorithm used in Oracle Database 10g computes the number of distinct values with a SQL statement that counts the distinct values found in a sample of the underlying table. With Oracle Database 10g, when gathering column statistics you have two choices:
1. Use a small sample size, which leads to less accurate results but has a short execution time.
2. Use a large sample or a full scan, which leads to very accurate results but has a very long execution time.
Oracle Database 11g introduces a new method for gathering column statistics that provides accuracy similar to a full scan with the execution time of a small sample (1-5%). This new technique is used when you invoke a procedure from DBMS_STATS with the ESTIMATE_PERCENT gathering option set to AUTO_SAMPLE_SIZE, which is the default value. The row-sampling-based algorithm is still used to collect the number of distinct values if you specify any value other than AUTO_SAMPLE_SIZE. This preserves the old behavior when you specify a sampling percentage.
Note: With Oracle Database 11g, you are encouraged to use AUTO_SAMPLE_SIZE. The new evaluation mechanism fixes the two most frequently encountered issues in Oracle Database 10g:
• The auto option stops too early and generates inaccurate statistics, so the user would specify a higher sample size than the one used by auto.
• The auto option stops too late and the performance is bad, so the user would specify a lower sample size than the one used by auto.
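A minimal sketch of the two invocation styles (SH.CUSTOMERS is illustrative): the first call uses the new hash-based mechanism, the second explicitly requests a percentage and therefore falls back to the row-sampling algorithm:

```sql
-- New approximate counting technique (AUTO_SAMPLE_SIZE is also the default)
exec dbms_stats.gather_table_stats('SH','CUSTOMERS', estimate_percent=>DBMS_STATS.AUTO_SAMPLE_SIZE);

-- Explicit percentage: the old row-sampling algorithm is used instead
exec dbms_stats.gather_table_stats('SH','CUSTOMERS', estimate_percent=>10);
```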
Multi-Column Statistics Overview
You can play the following mini lesson to better understand multi-column statistics: Multi-column statistics Overview (see URL in notes)
Multi-Column Statistics Overview
To better understand the following slides, you can spend some time playing the following mini lesson at:
http://stcontent.oracle.com/content/dav/oracle/Libraries/ST%20Curriculum/ST%20CurriculumPublic/Courses/Oracle%20Database%2011g/Oracle%20Database%2011g%20Release%201/11gR1_Mini_Lessons/11gR1_Beta1_Multi_Col_Stats_JFV/11gR1_Beta1_Multi_Col_Stats_viewlet_swf.html
Multi-Column Statistics Overview

AUTO (MAKE, MODEL)

S(MAKE ∧ MODEL) = S(MAKE) x S(MODEL)

1. select dbms_stats.create_extended_stats('jfv','auto','(make,model)') from dual;

2. exec dbms_stats.gather_table_stats('jfv','auto',method_opt=>'for all columns size 1 for columns (make,model) size 3');
Multi-Column Statistics Overview
With Oracle Database 10g, the query optimizer takes into account correlation between columns when computing the selectivity of multiple predicates in the following limited cases:
• If all the columns of a conjunctive predicate match all the columns of a concatenated index key, and the predicates are equalities, then the optimizer uses the number of distinct keys (NDK) in the index for estimating selectivity, as 1/NDK.
• When DYNAMIC_SAMPLING is set to level 4, the query optimizer uses dynamic sampling to estimate the selectivity of complex predicates involving several columns from the same table. However, the sample size is very small and it increases parsing time. As a result, the sample is likely to be statistically inaccurate and may cause more harm than good.
In all other cases, the optimizer assumes that the values of columns used in a complex predicate are independent of each other. It estimates the selectivity of a conjunctive predicate by multiplying the selectivities of the individual predicates. This approach always results in underestimation of the selectivity. To circumvent this issue, Oracle Database 11g allows you to collect, store, and use the following statistics to capture functional dependency between two or more columns (also called groups of columns): number of distinct values, number of nulls, frequency histograms, and density.
For example, consider a table AUTO where you store information about cars. The columns MAKE and MODEL are highly correlated in that MODEL determines MAKE. This is a strong dependency, and both columns should be considered by the optimizer as highly correlated. You can signal that correlation to the optimizer using the CREATE_EXTENDED_STATS function shown in the above example, and then compute the statistics for all columns, including the ones for the correlated groups you created.
Expression Statistics Overview

CREATE INDEX upperidx ON AUTO(upper(MODEL))

S(upper(MODEL)) = 0.01

DBA_STAT_EXTENSIONS

AUTO (MODEL)  SYS_STU3FOQ$BDH0S_14NGXFJ3TQ50

select dbms_stats.create_extended_stats('jfv','auto','(upper(model))') from dual;
exec dbms_stats.gather_table_stats('jfv','auto',method_opt=>'for all columns size 1 for columns (upper(model)) size 3');
Expression Statistics Overview
Predicates involving expressions on columns are a big issue for the query optimizer. When computing selectivity on predicates of the form function(Column) = constant, the optimizer assumes a static selectivity value of one percent. Obviously this approach is wrong and causes the optimizer to produce suboptimal plans. The query optimizer has been extended to better handle such predicates in limited cases, where the function preserves the data distribution characteristics of the column and thus allows the optimizer to use the column statistics; an example of such a function is TO_NUMBER. Further enhancements were made to evaluate built-in functions during query optimization to derive better selectivity using dynamic sampling. Lastly, the optimizer collects statistics on virtual columns created to support function-based indexes. However, these solutions are either limited to a certain class of functions, or work only for expressions used to create function-based indexes. Using expression statistics in Oracle Database 11g, you have a more general solution that includes arbitrary user-defined functions and does not depend on the presence of function-based indexes. As shown in the above example, this feature relies on the virtual column infrastructure to create statistics on expressions of columns.
Deferred Statistics Publishing Overview
You can play the following mini lesson to better understand statistic preferences and statistics publishing: Deferred Statistics Publishing Overview (see URL in notes)
Deferred Statistics Publishing Overview
To better understand the following slides, you can spend some time playing the following mini lesson at:
http://stcontent.oracle.com/content/dav/oracle/Libraries/ST%20Curriculum/ST%20CurriculumPublic/Courses/Oracle%20Database%2011g/Oracle%20Database%2011g%20Release%201/11gR1_Mini_Lessons/11gR1_Beta1_Publish_Stats_JFV/11gR1_Beta1_Publish_Stats_viewlet_swf.html
Deferred Statistics Publishing Overview
By default, the statistics-gathering operation automatically stores the new statistics in the data dictionary each time it completes the iteration for one object (table, partition, subpartition, or index). The optimizer sees them as soon as they are written to the data dictionary; these new statistics are called current statistics. This automatic publishing can be problematic for the DBA, who is never sure of the aftermath of the new statistics days or even weeks later. In addition, the statistics used by the optimizer can be inconsistent if, for example, table statistics are published before the statistics of its indexes, partitions, or subpartitions. To get around these potential issues, in Oracle Database 11g Release 1 you can separate the gathering step from the publication step of optimizer statistics. There are two benefits of separating the two steps:
• Support the statistics-gathering operation as an atomic transaction: the statistics of all tables and their dependent objects (indexes, partitions, subpartitions) in a schema are published at the same time. This new model has two nice properties: the optimizer always has a consistent view of the statistics, and if for some reason the gathering step fails in mid-flight, it can resume from where it left off when it is restarted using the DBMS_STATS.RESUME_GATHER_STATS procedure.
• Allow the DBA to validate the new statistics by running all or part of the workload using the newly gathered statistics on a test system; then, when satisfied with the test results, proceed to the publishing step to make them current in the production environment.
When you set the gathering option PUBLISH to FALSE, gathered statistics are stored in the private statistics tables instead of becoming current.
These private statistics are accessible from a number of views: {ALL|DBA|USER}_{TAB|COL|IND|TAB_HISTGRM}_PRIVATE_STATS. To test the private statistics, you basically have two options:
Deferred Statistics Publishing Example exec dbms_stats.set_table_prefs('SH','CUSTOMERS','PUBLISH','false');
Deferred Statistics Publishing Example
1) You use the SET_TABLE_PREFS procedure to set the PUBLISH option to FALSE. This prevents the next statistics-gathering operation from automatically publishing statistics as current. According to the first statement, this is only true for the SH.CUSTOMERS table.
2) Then you gather statistics on the SH.CUSTOMERS table into the private area of the dictionary.
3) Now you can test the new set of private statistics from your session by setting OPTIMIZER_USE_PRIVATE_STATISTICS to TRUE.
4) You then issue your test queries against SH.CUSTOMERS.
5) If you are satisfied with the test results, you can use the PUBLISH_PRIVATE_STATS procedure to render the private statistics for SH.CUSTOMERS current.
Note: To analyze the differences between the private statistics and the current ones, you could export the private statistics to your own statistics table, and then use the new DBMS_STATS.DIFF_TABLE_STATS function.
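The five steps can be put together as the following sketch; the parameter and procedure names (OPTIMIZER_USE_PRIVATE_STATISTICS, PUBLISH_PRIVATE_STATS) follow the course text for this release:

```sql
-- 1) Stop automatic publishing for this table only
exec dbms_stats.set_table_prefs('SH','CUSTOMERS','PUBLISH','false');

-- 2) Gather into the private area of the dictionary
exec dbms_stats.gather_table_stats('SH','CUSTOMERS');

-- 3) Make this session use the private statistics
alter session set optimizer_use_private_statistics = true;

-- 4) Run your test queries against SH.CUSTOMERS here

-- 5) When satisfied, make the private statistics current
exec dbms_stats.publish_private_stats('SH','CUSTOMERS');
```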
Summary
In this lesson, you should have learned how to: • Use ADDM to perform cluster-wide performance analysis • Set up SGA sizing initialization parameters • Set up Automatic Memory Management • Use memory advisors
After completing this lesson, you should be able to: • Implement the new partitioning methods • Employ Data Compression • Create a SQL Access Advisor analysis session using Enterprise Manager • Create a SQL Access Advisor analysis session using PL/SQL • Set up a SQL Access Advisor analysis to get partition recommendations
Partitioning Enhancements
Partitioning is an important tool for managing large databases. Partitioning allows the DBA to employ a "divide and conquer" methodology for managing database tables, especially as those tables grow. Partitioned tables allow a database to scale for very large datasets while maintaining consistent performance, without unduly impacting administrative or hardware resources. Partitioning enables faster data access within an Oracle database. Whether a database has 10 GB or 10 TB of data, partitioning can speed up data access by orders of magnitude. With the introduction of Oracle Database 11g, the DBA will find a useful assortment of partitioning enhancements. These enhancements include: • Addition of Interval Partitioning • Addition of System Partitioning • Composite Partitioning enhancements • Addition of Virtual Column-Based Partitioning • Addition of Reference Partitioning
Interval Partitioning
• Interval partitioning is an extension of range partitioning. • Partitions of a specified interval are created when inserted data exceeds all of the range partitions. • At least one range partition must be created. • Interval partitioning automates the creation of range partitions.
Interval Partitioning
Before the introduction of interval partitioning, the DBA was required to explicitly define the range of values for each partition. The problem is that explicitly defining the bounds for each partition does not scale as the number of partitions grows. Interval partitioning is an extension of range partitioning that instructs the database to automatically create partitions of a specified interval when data inserted into the table exceeds all of the range partitions. You must specify at least one range partition. The range partitioning key value determines the high value of the range partitions, which is called the transition point, and the database creates interval partitions for data beyond that transition point. Interval partitioning fully automates the creation of range partitions. Managing the creation of new partitions can be a cumbersome and highly repetitive task. This is especially true for predictable additions of partitions covering small ranges, such as adding new daily partitions. Interval partitioning automates this operation by creating partitions on demand. When using interval partitioning, consider the following restrictions: • You can only specify one partitioning key column, and it must be of NUMBER or DATE type. • Interval partitioning is not supported for index-organized tables. • You cannot create a domain index on an interval-partitioned table.
Interval Partitioning Example
CREATE TABLE SH.SALES_INTERVAL
PARTITION BY RANGE (time_id)
INTERVAL(NUMTOYMINTERVAL(1, 'month'))
(PARTITION P0 VALUES LESS THAN (TO_DATE('1-1-2002', 'dd-mm-yyyy')),
 PARTITION P1 VALUES LESS THAN (TO_DATE('1-1-2003', 'dd-mm-yyyy')),
 PARTITION P2 VALUES LESS THAN (TO_DATE('1-7-2003', 'dd-mm-yyyy')),
 PARTITION P3 VALUES LESS THAN (TO_DATE('1-1-2004', 'dd-mm-yyyy')))
AS SELECT * FROM SH.SALES WHERE time_id < TO_DATE('1-1-2004', 'dd-mm-yyyy');
Interval Partitioning Example
Consider the example above, which illustrates the creation of an interval partitioned table. The original CREATE TABLE statement specifies four partitions with varying widths. This portion of the table is range partitioned. It also specifies that above the transition point of 1-Jan-2004, partitions are created with a width of one month. These partitions are interval partitioned. Partition Pi0 will automatically be created using this information when a row with a value corresponding to January 2004 is inserted into the table. The high bound of partition P3 represents a transition point. P3 and all partitions below it (P0, P1, and P2 in this example) are in the range section, while all partitions above it fall into the interval section. The only argument to the INTERVAL clause is a constant of INTERVAL type if the partitioning column is of date type, and a constant of number type if the partitioning column is of number type. Currently, only partitioned tables in which the partitioning column is of DATE or NUMBER type are supported.
Moving the Transition Point
Interval partitioned table created as shown below:
CREATE TABLE SALES_INTERVAL
PARTITION BY RANGE (time_id)
INTERVAL(NUMTOYMINTERVAL(1, 'month'))
(PARTITION P0 VALUES LESS THAN (TO_DATE('1-1-2004', 'dd-mm-yyyy')))
AS SELECT * FROM SH.SALES WHERE 1 = 0;

Moving the transition point using the MERGE clause:
ALTER TABLE SH.SALES_INTERVAL MERGE PARTITIONS P3, P4 INTO PARTITION P4;
Moving the Transition Point
As a result of maintenance operations, a partition may move from the interval section to the range section, thus shifting the transition point upwards. For example, if a user merges two partitions in the interval section, the width of the resulting partition is no longer the same as the interval, and we have to move this partition to the range section. If this partition is the first partition in the interval section, the semantics are straightforward; otherwise, consider this example. A table is created as follows:
CREATE TABLE SALES_INTERVAL
PARTITION BY RANGE (time_id)
INTERVAL(NUMTOYMINTERVAL(1, 'month'))
(PARTITION P0 VALUES LESS THAN (TO_DATE('1-1-2004', 'dd-mm-yyyy')))
AS SELECT * FROM SH.SALES WHERE 1 = 0;
Rows come in for January 2004, March 2004, and April 2004, and three corresponding partitions are created; let's call them P1, P3, and P4 respectively. Then the statement below is executed:
ALTER TABLE SALES_INTERVAL MERGE PARTITIONS P3, P4 INTO PARTITION P4;
The semantics for interval partitioned tables are that after the merge the table has three partitions: P0 corresponding to values less than 1-JAN-2004, P1 corresponding to rows for January 2004, and P4 for rows in February, March, and April 2004.
System Partitioning
System Partitioning: • Enables application-controlled partitioning for selected tables • Provides the benefits of partitioning but the partitioning and data placement are controlled by the application • Does not employ partitioning keys like other partitioning methods • Does not support partition pruning in the traditional sense
System Partitioning
System partitioning enables application-controlled partitioning for arbitrary tables. The database simply provides the ability to break down a table into meaningless partitions; all other aspects of partitioning are controlled by the application. System partitioning provides the well-known benefits of partitioning (scalability, availability, and manageability), but the partitioning and actual data placement are controlled by the application.

The most fundamental difference between system partitioning and other methods is that system partitioning does not have any partitioning keys. Consequently, the distribution or mapping of rows to a particular partition is not implicit. Instead, the user specifies the partition to which a row maps by using partition-extended syntax when inserting the row. Because system partitioned tables do not have a partitioning key, the usual performance benefits of partitioned tables are not available for them. Specifically, there is no support for traditional partition pruning, partition-wise joins, and so on. Partition pruning is achieved by accessing the same partitions in the system partitioned table as those that were accessed in the base table.

System partitioned tables provide the manageability advantages of equi-partitioning. For example, a nested table can be created as a system partitioned table that has the same number of partitions as the base table. A domain index can be backed up by a system partitioned table that has the same number of partitions as the base table. This gives the following benefits: When a partition is accessed in the base table, the corresponding partition can be accessed in the system partitioned table; pruning is based on the base table pruning. Any DDL performed on the base table can be duplicated on the system partitioned table. For example, if a partition is dropped on the base table, the corresponding partition can be dropped in the system partitioned table.
System Partitioning Example
CREATE TABLE systab (c1 integer, c2 integer)
PARTITION BY SYSTEM
( PARTITION p1 TABLESPACE tbs_1,
  PARTITION p2 TABLESPACE tbs_2,
  PARTITION p3 TABLESPACE tbs_3,
  PARTITION p4 TABLESPACE tbs_4
);
Inserting into the system partitioned table:
INSERT INTO systab PARTITION (p1) VALUES (4,5);   /* Partition p1 */
INSERT INTO systab PARTITION (1) VALUES (4,5);    /* First partition */
INSERT INTO systab PARTITION (:pno) VALUES (4,5); /* pno bound to 1/p1 */
System Partitioning Example
The syntax in the example above creates a table with four partitions. Each partition can have different physical attributes. INSERT and MERGE statements must use partition-extended syntax to identify the particular partition a row should go into. For example, the tuple (4,5) can be inserted into any one of the above four partitions:
INSERT INTO systab PARTITION (p1) VALUES (4,5);   /* Partition p1 */
INSERT INTO systab PARTITION (1) VALUES (4,5);    /* First partition */
INSERT INTO systab PARTITION (:pno) VALUES (4,5); /* pno bound to 1/p1 */
Or:
INSERT INTO systab PARTITION (p2) VALUES (4,5);   /* Partition p2 */
INSERT INTO systab PARTITION (2) VALUES (4,5);    /* Second partition */
INSERT INTO systab PARTITION (:pno) VALUES (4,5); /* pno bound to 2/p2 */
As the examples above show, the partition extended syntax supports both numbers and bind variables. The use of bind variables is important because it allows cursor sharing of insert statements. Deletes and updates do not require the partition extended syntax. However since there is no partition pruning, if the partition extended syntax is omitted the entire table will be scanned to execute the operation. Again, this example highlights the fact that there is no implicit mapping from tuples to any partition.
System Partitioning Guidelines
The following operations are supported for system partitioned tables: • Partition maintenance operations and other DDL operations • Creation of local indexes • Creation of local bitmapped indexes • Creation of global indexes • All DML operations • INSERT AS SELECT with partition-extended syntax: INSERT INTO <table> PARTITION (<partition-name | number | bind-variable>) <subquery>
System Partitioning Guidelines
The following operations are supported for system partitioned tables: • Partition maintenance operations and other DDL operations (see exceptions below) • Creation of local indexes • Creation of local bitmapped indexes • Creation of global indexes • All DML operations • INSERT AS SELECT with partition-extended syntax: INSERT INTO <table> PARTITION (<partition-name | number | bind-variable>) <subquery>
Because of the peculiar requirements of system partitioning, the following operations are not supported for system partitioned tables: • Unique local indexes, because they require a partitioning key • CREATE TABLE AS SELECT: since there is no partitioning method, it is not possible to distribute rows to partitions. Instead, first create the table and then insert rows into each partition. • INSERT INTO <table> AS <subquery> • SPLIT PARTITION operations
Composite Partitioning Enhancements
• Range Top Level
  – Range-Range
• List Top Level
  – List-List
  – List-Hash
  – List-Range
• Interval Top Level
  – Interval-Range
  – Interval-List
  – Interval-Hash
Composite Partitioning Enhancements
Prior to the release of Oracle Database 11g, the only composite partitioning methods supported were Range-List and Range-Hash. With this new release, List partitioning can be a top-level partitioning method for composite partitioned tables, giving us the List-List, List-Hash, List-Range, and Range-Range composite methods. With the introduction of Interval partitioning, Interval-Range, Interval-List, and Interval-Hash are now supported composite partitioning methods.
Range-Range Partitioning: Composite range-range partitioning enables logical range partitioning along two dimensions; for example, partition by order_date and range subpartition by shipping_date.
List-Range Partitioning: Composite list-range partitioning enables logical range subpartitioning within a given list partitioning strategy; for example, list partition by country_id and range subpartition by order_date.
List-Hash Partitioning: Composite list-hash partitioning enables hash subpartitioning of a list-partitioned object; for example, to enable partition-wise joins.
List-List Partitioning: Composite list-list partitioning enables logical list partitioning along two dimensions; for example, list partition by country_id and list subpartition by sales_channel.
Range-Range Partitioning Example
CREATE TABLE sales
 (prod_id NUMBER(6) NOT NULL,
  cust_id NUMBER NOT NULL,
  time_id DATE NOT NULL,
  channel_id CHAR(1) NOT NULL,
  promo_id NUMBER(6) NOT NULL,
  quantity_sold NUMBER(3) NOT NULL,
  amount_sold NUMBER(10,2) NOT NULL)
PARTITION BY RANGE (time_id)
SUBPARTITION BY RANGE (cust_id)
SUBPARTITION TEMPLATE
 ( SUBPARTITION sp1 VALUES LESS THAN (50000),
   SUBPARTITION sp2 VALUES LESS THAN (100000),
   SUBPARTITION sp3 VALUES LESS THAN (150000),
   SUBPARTITION sp4 VALUES LESS THAN (MAXVALUE) )
 ( PARTITION VALUES LESS THAN (TO_DATE('1-APR-1999','DD-MON-YYYY')),
   PARTITION VALUES LESS THAN (TO_DATE('1-JUL-1999','DD-MON-YYYY')),
   PARTITION VALUES LESS THAN (TO_DATE('1-OCT-1999','DD-MON-YYYY')),
   PARTITION VALUES LESS THAN (TO_DATE('1-JAN-2000','DD-MON-YYYY')) );
Composite Range-Range Partitioning Composite Range-Range partitioning enables logical range partitioning along two dimensions. In the example above, the table SALES is created and range partitioned on time_id. Using a subpartition template, the SALES table is subpartitioned by range using cust_id for the subpartition key. Because of the template, all partitions will have the same number of subpartitions with the same bounds as defined by the template. If no template is specified, a single default partition bound by MAXVALUE (Range) or DEFAULT value (List) will be created. Although the example above illustrates the Range-Range methodology, the other new composite partitioning methods use similar syntax and statement structure. All of the composite partitioning methods fully support the existing partition pruning methods for queries involving predicates on the subpartitioning key.
Virtual Column-Based Partitioning
• Virtual column values are derived by the evaluation of a function or expression.
• Virtual columns can be defined within a CREATE or ALTER table operation.
CREATE TABLE employees
 (employee_id number(6) not null,
  …
  total_compensation as (salary * (1+commission_pct))
• Virtual column values are not physically stored in the table row on disk, but are evaluated on demand.
• Virtual columns can be indexed, and used in queries, DML, and DDL statements like other table column types.
• Tables and indexes can be partitioned on a virtual column, and statistics can even be gathered on them.
Virtual Column-Based Partitioning Columns of a table whose values are derived by computation of a function or an expression are known as virtual columns. These columns can be specified during a CREATE, or ALTER table operation and can be defined to be either visible or hidden. Virtual columns share the same SQL namespace as other real table columns and conform to the data type of the underlying expression that describes it. These columns can be used in queries like any other table columns providing a simple, elegant and consistent mechanism of accessing expressions in a SQL statement. The values for virtual columns are not physically stored in the table row on disk, rather they are evaluated on demand. The functions or expressions describing the virtual columns should be deterministic and pure, meaning the same set of input values should return the same output values. Virtual columns can be used like any other table columns. They can be indexed, used in queries, DML and DDL statements. Tables and indexes can be partitioned on a virtual column and even statistics can be gathered upon them. You can use virtual column partitioning to partition key columns defined on virtual columns of a table. Frequently, business requirements to logically partition objects do not match existing columns in a one-to-one manner. With the introduction of Oracle Database 11g, partitioning has been enhanced to allow a partitioning strategy defined on virtual columns, thus enabling a more comprehensive match of the business requirements.
Oracle Database 11g: New Features for Administrators 9 - 12
Virtual Column-Based Partitioning Example
CREATE TABLE employees
 (employee_id number(6) not null,
  first_name varchar2(30),
  last_name varchar2(40) not null,
  email varchar2(25),
  phone_number varchar2(20),
  hire_date date not null,
  job_id varchar2(10) not null,
  salary number(8,2),
  commission_pct number(2,2),
  manager_id number(6),
  department_id number(4),
  total_compensation as (salary * (1+commission_pct))
 )
PARTITION BY RANGE (total_compensation)
( PARTITION p1 VALUES LESS THAN (50000),
  PARTITION p2 VALUES LESS THAN (100000),
  PARTITION p3 VALUES LESS THAN (150000),
  PARTITION p4 VALUES LESS THAN (MAXVALUE)
);
Virtual Column-Based Partitioning Example
Consider the example in the slide above. The EMPLOYEES table is created using the standard CREATE TABLE syntax. The total_compensation column is a virtual column, calculated by multiplying salary by one plus commission_pct. The PARTITION BY RANGE clause then declares total_compensation (a virtual column) to be the partitioning key of the EMPLOYEES table. Partition pruning takes place for virtual column partition keys when the predicates on the partitioning key are of the following types:
• Equality or Like
• List
• Range
• TBL$
• Partition extended names
Given a join operation between two tables, the optimizer recognizes when a partition-wise join (full or partial) is applicable, decides whether to use it, and annotates the join properly when it decides to use it. This applies to both the serial and parallel cases. In order to recognize a full partition-wise join, the optimizer relies on the definition of equi-partitioning of two objects; this definition includes the equivalence of the virtual expression on which the tables were partitioned.
Oracle Database 11g: New Features for Administrators 9 - 13
Reference Partitioning
• A table can now be partitioned based on the partitioning method of a table referenced in its referential constraint • The partitioning key is resolved through an existing parent-child relationship • The partitioning key is enforced by active primary key or foreign key constraints • Tables with a parent-child relationship can be equi-partitioned by inheriting the partitioning key from the parent table without duplicating the key columns
Reference Partitioning Reference partitioning provides the ability to partition a table based on the partitioning scheme of the table referenced in its referential constraint. The partitioning key is resolved through an existing parent-child relationship, enforced by active primary key or foreign key constraints. The benefit of this is that tables with a parent-child relationship can be logically equi-partitioned by inheriting the partitioning key from the parent table without duplicating the key columns. The logical dependency also automatically cascades partition maintenance operations, making application development easier and less error-prone. To create a reference-partitioned table, you specify a PARTITION BY REFERENCE clause in the CREATE TABLE statement. This clause specifies the name of a referential constraint and this constraint becomes the partitioning referential constraint that is used as the basis for reference partitioning in the table. As with other partitioned tables, you can specify object-level default attributes, and can optionally specify partition descriptors that override the object-level defaults on a per-partition basis.
Oracle Database 11g: New Features for Administrators 9 - 14
Reference Partitioning Example
CREATE TABLE orders
( order_id     NUMBER(12),
  order_date   TIMESTAMP WITH LOCAL TIME ZONE,
  order_mode   VARCHAR2(8),
  customer_id  NUMBER(6),
  order_status NUMBER(2),
  order_total  NUMBER(8,2),
  sales_rep_id NUMBER(6),
  promotion_id NUMBER(6),
  CONSTRAINT orders_pk PRIMARY KEY(order_id)
)
PARTITION BY RANGE(order_date)
( PARTITION Q1_2005 VALUES LESS THAN (TO_DATE('01-APR-2005','DD-MON-YYYY')),
  PARTITION Q2_2005 VALUES LESS THAN (TO_DATE('01-JUL-2005','DD-MON-YYYY')),
  PARTITION Q3_2005 VALUES LESS THAN (TO_DATE('01-OCT-2005','DD-MON-YYYY')),
  PARTITION Q4_2005 VALUES LESS THAN (TO_DATE('01-JAN-2006','DD-MON-YYYY'))
);
Reference Partitioning Example
The example in the slide above creates a table called ORDERS that is range-partitioned on order_date. It is created with four partitions: Q1_2005, Q2_2005, Q3_2005, and Q4_2005. This table is referenced in the creation of a reference-partitioned table on the next slide.
Oracle Database 11g: New Features for Administrators 9 - 15
Reference Partitioning Example (Continued)
CREATE TABLE order_items
( order_id     NUMBER(12) NOT NULL,
  line_item_id NUMBER(3) NOT NULL,
  product_id   NUMBER(6) NOT NULL,
  unit_price   NUMBER(8,2),
  quantity     NUMBER(8),
  CONSTRAINT order_items_fk
    FOREIGN KEY(order_id) REFERENCES orders(order_id)
)
PARTITION BY REFERENCE(order_items_fk);
Reference Partitioning Example (continued)
The reference-partitioned child table ORDER_ITEMS in the example above is created with four partitions, Q1_2005, Q2_2005, Q3_2005, and Q4_2005, where each partition contains the order_items rows corresponding to orders in the respective parent partition.
If partition descriptors are provided, the number of partitions described must exactly equal the number of partitions or subpartitions in the referenced table. If the parent table is a composite partitioned table, the table will have one partition for each subpartition of its parent; otherwise, the table will have one partition for each partition of its parent. Partition bounds cannot be specified for the partitions of a reference-partitioned table.
The partitions of a reference-partitioned table can be named. If a partition is not explicitly named, it inherits its name from the corresponding partition in the parent table, unless the inherited name conflicts with one of the explicit names given; in that case, the partition is given a system-generated name. Partitions of a reference-partitioned table are collocated with the corresponding partition of the parent table if no explicit tablespace is specified for the reference-partitioned table's partition.
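One way to confirm the inherited partitioning (assuming the two tables above have been created) is to query the data dictionary; this sketch is illustrative:

```sql
-- The child's partitions should mirror the parent's names (Q1_2005 .. Q4_2005)
SELECT table_name, partition_name
FROM   user_tab_partitions
WHERE  table_name IN ('ORDERS', 'ORDER_ITEMS')
ORDER  BY table_name, partition_position;
```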
Oracle Database 11g: New Features for Administrators 9 - 16
Compression
• Table compression is optimized for relational data. • There is virtually no negative impact on the performance of queries against compressed data. • There can be a significant positive impact on queries accessing large amounts of data. • The data is compressed by eliminating duplicate values in a database block. • All database features and functions that work on regular blocks also work on compressed blocks.
Compression The cost of disk systems can be a very large portion of building and maintaining large data warehouses. Oracle Database helps reduce this cost by compressing the data and it does so without the typical trade-offs of space savings versus access time to data. The table compression technique used is very advantageous for large data warehouses. It has virtually no negative impact on the performance of queries against compressed data; in fact, it may have a significant positive impact on queries accessing large amounts of data, as well as on data management operations such as backup and recovery. Consider that you need to retrieve less data from disk in order to satisfy a query or perform a backup, which simply reduces the amount of work that needs to be performed. The data is compressed by eliminating duplicate values in a database block. Compressed data stored in a database block is self-contained. That is, all the information needed to re-create the uncompressed data in a block is available within that block. Duplicate values in all the rows and columns in a block are stored once at the beginning of the block, in what is called a symbol table for that block. All occurrences of such values are replaced with a short reference to the symbol table. With the exception of a symbol table at the beginning, compressed database blocks look very much like regular database blocks.
Oracle Database 11g: New Features for Administrators 9 - 17
Compression (continued) As a result of the unique compression techniques, there is no expensive decompression operation needed to access compressed table data. This means that the decision as to when to apply compression does not need to take a possible negative impact on queries into account. Compression is done as part of bulk-loading data into the database. The overhead associated with the initial compression may be an increase in CPU resources of up to 50%. This is the primary trade-off that needs to be taken into account when considering compression.
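As a hedged sketch of the trade-off described above (the table names are hypothetical), compression can be enabled at table creation either for bulk loads only or, new in Oracle Database 11g, for all DML operations:

```sql
-- Compression applied only during direct-path (bulk) loads
CREATE TABLE sales_archive
( sale_id   NUMBER,
  sale_date DATE,
  amount    NUMBER(10,2)
) COMPRESS FOR DIRECT_LOAD OPERATIONS;

-- New in Oracle Database 11g: compression during all DML as well
CREATE TABLE sales_oltp
( sale_id   NUMBER,
  sale_date DATE,
  amount    NUMBER(10,2)
) COMPRESS FOR ALL OPERATIONS;
```

The second form accepts the CPU overhead on ordinary inserts and updates in exchange for space savings outside of bulk-load paths.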
Oracle Database 11g: New Features for Administrators 9 - 18
Data Compression Levels
Three levels of compression are available:
• LOW
  – Uses HSC, the native Oracle compression algorithm
  – Gives the best CPU performance but not the best compression ratio
• MEDIUM
  – Employs LZO, level 1
  – Gives a better compression ratio but CPU utilization is higher
• HIGH
  – Uses the ZLIB level 9 algorithm
  – Has the best compression ratio but the highest CPU utilization
Data Compression Specifics
For compressing the user data, three algorithms are available for the DBA to choose between. The three methods balance better compression against CPU usage:
• HSC, the native Oracle compression algorithm, gives the best CPU performance but not the best compression ratio.
• LZO, level 1, gives a better compression ratio but not as good CPU performance.
• The algorithm with the best compression ratio, ZLIB level 9, gives the poorest CPU performance but the highest compression rate.
The DBA can choose between three levels of compression (HIGH, MEDIUM, and LOW), depending on what is favored most: space or CPU utilization.
• LOW uses the Oracle native compression algorithm HSC.
• MEDIUM employs LZO level 1.
• HIGH uses ZLIB level 9.
The compression works at the block level. Compressing the data can be costly in terms of CPU, but decompression is very fast. For LZO and ZLIB, the data portion must be decompressed whenever something in that area needs to be accessed; for this, larger in-memory buffers are allocated. When the reading or modification is done, the buffer is compressed again. Of course, this can be costly whenever DML operations are performed.
If an ALTER TABLE statement is issued to turn on compression, only blocks generated after this statement will be compressed. The user can also switch between the compression algorithms using this ALTER TABLE statement but, again, only the new blocks will use the new algorithm.
Oracle Database 11g: New Features for Administrators 9 - 19
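The notes above reference ALTER TABLE to turn compression on for subsequently written blocks. As an illustrative sketch (the table name is hypothetical; exact keywords vary by release and by whether table data or LOB data is being compressed):

```sql
-- Only blocks written after this statement are compressed;
-- existing blocks remain in their current format
ALTER TABLE sales COMPRESS;

-- To rewrite the existing data in compressed format,
-- the segment must be rebuilt, for example:
ALTER TABLE sales MOVE COMPRESS;
```

The MOVE variant rebuilds the segment so that already-stored rows are compressed as well, at the cost of rewriting the table.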
SQL Access Advisor: Overview What partitions, indexes, and MVs do I need to optimize my entire workload?
SQL Access Advisor: Overview
Defining appropriate access structures to optimize SQL queries has always been a concern for an Oracle DBA. As a result, many papers and scripts have been written, and high-end tools developed, to address the matter. In addition, with the development of partitioning and materialized view technology, deciding on access structures has become even more complex. As part of the manageability improvements in Oracle Database 10g and 11g, SQL Access Advisor has been introduced to address this very critical need.
SQL Access Advisor identifies and helps resolve performance problems relating to the execution of SQL statements by recommending which indexes, materialized views, materialized view logs, or partitions to create, drop, or retain. It can be run from Database Control or from the command line by using PL/SQL procedures.
SQL Access Advisor takes an actual workload as input, or the advisor can derive a hypothetical workload from the schema. It then recommends the access structures for a faster execution path. It provides the following advantages:
• Does not require you to have expert knowledge
• Bases decision making on rules that actually reside in the cost-based optimizer
• Is synchronized with the optimizer and Oracle database enhancements
• Is a single advisor covering all aspects of SQL access methods
• Provides simple, user-friendly GUI wizards
• Generates scripts for implementation of recommendations
Oracle Database 11g: New Features for Administrators 9 - 20
SQL Access Advisor: Usage Model
SQL Access Advisor takes as input a workload that can be derived from multiple sources:
• SQL cache, to take the current content of V$SQL
• Hypothetical, to generate a likely workload from your dimensional model. This option is interesting when your system is being initially designed.
• SQL Tuning Sets, from the workload repository
SQL Access Advisor also provides powerful workload filters that you can use to target the tuning. For example, a user can specify that the advisor should look at only the 30 most resource-intensive statements in the workload, based on optimizer cost. For the given workload, the advisor then does the following:
• Simultaneously considers index solutions, materialized view solutions, partition solutions, or combinations of all three
• Considers storage for creation and maintenance costs
• Does not generate drop recommendations for partial workloads
• Optimizes materialized views for maximum query rewrite usage and fast refresh
• Recommends materialized view logs for fast refresh
• Recommends partitioning for tables, indexes, and materialized views
• Combines similar indexes into a single index
• Generates recommendations that support multiple workload queries
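The "30 most resource-intensive statements" filter mentioned above can be expressed through task parameters; the following sketch uses the documented SQL_LIMIT and ORDER_LIST parameters of DBMS_ADVISOR (the task name MYTASK is hypothetical):

```sql
BEGIN
  -- Rank candidate statements by optimizer cost ...
  dbms_advisor.set_task_parameter('MYTASK', 'ORDER_LIST', 'OPTIMIZER_COST');
  -- ... and consider only the top 30
  dbms_advisor.set_task_parameter('MYTASK', 'SQL_LIMIT', 30);
END;
/
```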
Oracle Database 11g: New Features for Administrators 9 - 21
Possible Recommendations

Recommendation                                               Comprehensive   Limited
Add new (partitioned) index on table or materialized view.   YES             YES
Drop an unused index.                                        YES             NO
Modify an existing index by changing the index type.         YES             NO
Modify an existing index by adding columns at the end.       YES             YES
Add a new (partitioned) materialized view.                   YES             YES
Drop an unused materialized view (log).                      YES             NO
Add a new materialized view log.                             YES             YES
Modify an existing materialized view log to add new
  columns or clauses.                                        YES             YES
Partition an existing unpartitioned table or index.
Possible Recommendations
SQL Access Advisor carefully considers the overall impact of recommendations and makes recommendations by using only the known workload and supplied information. Two workload analysis methods are available:
• Comprehensive: With this approach, SQL Access Advisor addresses all aspects of tuning partitions, materialized views, indexes, and materialized view logs. It assumes that the workload contains a complete and representative set of application SQL statements.
• Limited: Unlike the comprehensive workload approach, a limited workload approach assumes that the workload contains only problematic SQL statements. Thus, advice is sought for improving the performance of a portion of an application environment.
When comprehensive workload analysis is chosen, SQL Access Advisor forms a better set of global tuning adjustments, but the effect may be a longer analysis time. As shown in the table, the chosen workload approach determines the type of recommendations made by the advisor.
Note: Partition recommendations can only work on tables that have at least 10,000 rows, and on workloads that have some predicates and joins on columns of type NUMBER or DATE; partitioning advice can only be generated on these types of columns. In addition, partitioning advice can only be generated for single-column INTERVAL and HASH partitioning. INTERVAL partitioning recommendations can be output as RANGE syntax, but INTERVAL is the default. HASH partitioning is done only to leverage partition-wise joins.
Oracle Database 11g: New Features for Administrators 9 - 22
SQL Access Advisor Session: Initial Options
The next few slides describe a typical SQL Access Advisor session. You can access the SQL Access Advisor wizard through the Advisor Central link on the Database Home page, or through individual alerts or performance pages that may include a link to facilitate solving a performance problem.
The SQL Access Advisor wizard consists of several steps during which you supply the SQL statements to tune and the types of access methods you want to use. Use the SQL Access Advisor Default Options page to select a template or task from which to populate default options before starting the wizard. You can choose Continue to start the wizard or Cancel to go back to the Advisor Central page. Choose View Options to view a list of the options for the specified template or task.
Note: The SQL Access Advisor may be interrupted while generating recommendations, allowing the results to be reviewed.
For general information about using SQL Access Advisor, see the "Overview of the SQL Access Advisor" section in the "SQL Access Advisor" chapter of the Oracle Data Warehousing Guide.
Oracle Database 11g: New Features for Administrators 9 - 23
SQL Access Advisor Session: Initial Options (Continued)
If you choose the Inherit Options from a Task or Template option on the Initial Options page, you can select an existing task or an existing template from which to inherit the SQL Access Advisor options. By default, the SQLACCESS_EMTASK template is used. You can view the various options defined by a task or a template by selecting the corresponding object and clicking View Options.
Oracle Database 11g: New Features for Administrators 9 - 24
SQL Access Advisor: Workload Source
You can choose your workload source from three different sources:
• Current and Recent SQL Activity: This source corresponds to SQL statements that are still cached in your SGA.
• Use an existing SQL Tuning Set: You also have the possibility to create and use a SQL Tuning Set that holds your statements.
• Hypothetical Workload: This option provides a schema that allows the advisor to search for dimension tables and produce a workload. This is very useful when initially designing your schema.
Using the Filter Options section, you can further filter your workload source. Filter options are:
• Resource Consumption: Number of statements, ordered by Optimizer Cost, Buffer Gets, CPU Time, Disk Reads, Elapsed Time, or Executions
• Users
• Tables
• SQL Text
• Module IDs
• Actions
Oracle Database 11g: New Features for Administrators 9 - 25
SQL Access Advisor: Recommendation Options
Use the Recommendation Options page to choose whether to limit the SQL Access Advisor to recommendations based on a single access method. You can choose the type of structures to be recommended by the advisor. If none of the three possible structures is chosen, the advisor evaluates existing structures instead of trying to recommend new ones.
You can use the Advisor Mode section to run the advisor in one of two modes. These modes affect the quality of recommendations as well as the length of time required for processing. In Comprehensive mode, the advisor searches a large pool of candidates, resulting in recommendations of the highest quality. In Limited mode, the advisor performs quickly, limiting the candidate recommendations by working on the highest-cost statements only.
Oracle Database 11g: New Features for Administrators 9 - 26
SQL Access Advisor: Recommendation Options (Continued) You can choose Advanced Options to show or hide options that allow you to set space restrictions, tuning options and default storage locations. Use the Workload Categorization section to set options for workload volatility and scope. For workload volatility, you can choose to favor read-only operations or you can consider the volatility of referenced objects when forming recommendations. For workload scope, you can select Partial Workload, which will not include recommendations to drop unused access structures, or Complete Workload, which does include recommendations to drop unused access structures. Use the Space Restrictions section to specify a hard space limit, which forces the advisor to produce only recommendations with total space requirements that do not exceed the specified limit. Use the Tuning Options section to specify options that tailor the recommendations made by the advisor. The Prioritize Tuning of SQL Statements by dropdown list allows you to prioritize by Optimizer Cost, Buffer Gets, CPU Time, Disk Reads, Elapsed Time, and Execution Count. Use the Default Storage Locations section to override the defaults defined for schema and tablespace locations. By default indexes are placed in the schema and tablespace of the table they reference. Materialized views are placed in the schema and tablespace of the user who executed one of the queries that contributed to the materialized view recommendation. Note: Oracle highly recommends that you specify the default schema and tablespaces for materialized views.
Oracle Database 11g: New Features for Administrators 9 - 27
SQL Access Advisor: Schedule and Review You can then schedule and submit your new analysis by specifying various parameters to the scheduler. The possible options are shown on the above screen shots.
Oracle Database 11g: New Features for Administrators 9 - 28
SQL Access Advisor: Results From the Advisor Central page you can retrieve the task details for your analysis. By selecting the task name in the Results section of the Advisor Central page, you can get to the Results for Task Summary page from where you can see an overview of the Access Advisor findings. The page presents charts and statistics that provide overall workload performance and query execution time potential improvement for the recommendations. You can use the page to show statement counts and recommendation action counts.
Oracle Database 11g: New Features for Administrators 9 - 29
SQL Access Advisor: Results
To see other aspects of the results for the Access Advisor task, choose one of the three other tabs on the page: Recommendations, SQL Statements, or Details.
On the Recommendations page, you can drill down to each of the recommendations. For each of them, you can view important information in the Select Recommendations for Implementation table. You can then select one or more recommendations and schedule their implementation.
If you click the ID for a particular recommendation, you are taken to the Recommendation page, which displays all actions for the specified recommendation and optionally lets you modify the tablespace name of the statement. When you complete any changes, choose OK to apply them. From that page, you can view the full text of an action by choosing the link in the Action field for the specified action. You can view the SQL for all actions in the recommendation by clicking Show SQL.
Oracle Database 11g: New Features for Administrators 9 - 30
SQL Access Advisor: Recommendation Implementation
Most of these recommendations can be executed on a production system using simple SQL DDL statements. For those cases, SQL Access Advisor produces executable SQL statements. In some instances, for example repartitioning existing partitioned tables or existing dependent indexes, simple SQL is not sufficient. SQL Access Advisor then generates a script calling external packages such as DBMS_REDEFINITION to enable the user to implement the recommended change.
In the above example, SQL Access Advisor makes the recommendation to partition table SH.CUSTOMERS on the CUST_CREDIT_LIMIT column. The recommendation uses the INTERVAL partitioning scheme and defines the first range of values as being less than 1600. Interval partitions are partitions based on a numeric range or datetime interval. They extend range partitioning by instructing the database to automatically create partitions of the specified interval when data inserted into the table exceeds all of the existing range partitions.
Oracle Database 11g: New Features for Administrators 9 - 31
SQL Access Advisor: Results To see other aspects of the results for the Access Advisor task, choose one of the three other tabs on the page, Recommendations, SQL Statements, or Details. The SQL Statements page shows you a chart and a corresponding table that list SQL statements initially ordered by the largest cost improvement. The top SQL statement will be improved the most by implementing its associated recommendation.
Oracle Database 11g: New Features for Administrators 9 - 32
SQL Access Advisor: Results To see other aspects of the results for the Access Advisor task, choose one of the three other tabs on the page, Recommendations, SQL Statements, or Details. The Details page shows you the workload and task options that were used when the task was created. This page also gives you all journal entries that were logged during the task execution.
Oracle Database 11g: New Features for Administrators 9 - 33
SQL Access Advisor: PL/SQL Procedure Flow The graphic shows the typical operational flow of the SQL Access Advisor procedures from the DBMS_ADVISOR package. You can find a complete description of each of these procedures in the Oracle Database PL/SQL Packages and Types Reference guide. • Step 1: Create and manage tasks and data. This step uses a SQL Access Advisor task. • Step 2: Prepare tasks for various operations. This step uses SQL Access Advisor parameters. • Step 3: Prepare and analyze data. This step uses SQL Tuning Sets and SQL Access Advisor tasks. With Oracle Database 11g R1, GET_TASK_REPORT can report back using HTML or XML in addition to just text. Note: The DBMS_ADVISOR.QUICK_TUNE procedure is a shortcut that performs all the necessary operations to analyze a single SQL statement. The operation creates a task for which all parameters are defaulted. The workload is constituted by the specified statement only. Finally, the task is executed and the results are saved in the repository. You can also instruct the procedure to implement the final recommendations.
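The QUICK_TUNE shortcut mentioned in the note can be invoked in a single call; this sketch uses a hypothetical task name and statement:

```sql
BEGIN
  -- Creates a task with default parameters, analyzes the single
  -- statement, and saves the recommendations in the repository
  dbms_advisor.quick_tune(
    dbms_advisor.sqlaccess_advisor,
    'MY_QUICK_TASK',
    'SELECT COUNT(*) FROM sh.sales WHERE amount_sold > 100');
END;
/
```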
Oracle Database 11g: New Features for Administrators 9 - 34
SQL Access Advisor: PL/SQL Example
-- 1: Create the task
BEGIN
  dbms_advisor.create_task(dbms_advisor.sqlaccess_advisor, 'MYTASK');
END;
/

-- 2: Set task parameters
BEGIN
  dbms_advisor.set_task_parameter('MYTASK', 'ANALYSIS_SCOPE', 'ALL');
  dbms_advisor.set_task_parameter('MYTASK', 'MODE', 'COMPREHENSIVE');
END;
/

-- 3: Attach the workload, execute, and retrieve the script
BEGIN
  dbms_advisor.add_sts_ref('MYTASK', 'SH', 'MYSTS');
  dbms_advisor.execute_task('MYTASK');
  dbms_output.put_line(dbms_advisor.get_task_script('MYTASK'));
END;
/
SQL Access Advisor: PL/SQL Example
Matching the order shown on the previous slide, the above example shows a possible SQL Access Advisor tuning session using PL/SQL code.
The first PL/SQL block creates a new tuning task called MYTASK. This task uses the SQL Access Advisor.
The second PL/SQL block sets SQL Access Advisor parameters for MYTASK. In the example, ANALYSIS_SCOPE is set to ALL, which means recommendations are generated for indexes, materialized views, and partitions. MODE is then set to COMPREHENSIVE to include all SQL statements that are part of the SQL Tuning Set associated with the task.
The third PL/SQL block associates a workload with MYTASK; here, an existing SQL Tuning Set called MYSTS is used. The tuning task can then be executed. After its execution completes, you can generate the corresponding recommendation script, as shown in the third block on the slide.
Note: For a complete list of SQL Access Advisor parameters (around 50), refer to the Oracle Database PL/SQL Packages and Types Reference guide.
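To save the generated script to disk rather than printing it, the documented DBMS_ADVISOR.CREATE_FILE procedure can be used; the directory object and file name below are hypothetical:

```sql
-- ADVISOR_DIR must be an existing directory object
-- that the current user is allowed to write to
BEGIN
  dbms_advisor.create_file(
    dbms_advisor.get_task_script('MYTASK'),
    'ADVISOR_DIR',
    'mytask_script.sql');
END;
/
```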
Oracle Database 11g: New Features for Administrators 9 - 35
Summary
In this lesson, you should have learned how to:
• Implement the new partitioning methods
• Employ data compression
• Create a SQL Access Advisor analysis session using Enterprise Manager
• Create a SQL Access Advisor analysis session using PL/SQL
• Set up a SQL Access Advisor analysis to get partition recommendations
Oracle Database 11g: New Features for Administrators 10 - 1
Objectives
After completing this lesson, you should be able to: • Describe Oracle Database 11g new and enhanced RMAN features • Configure archivelog deletion policies • Duplicate active databases by using the Oracle network (without backups) • Back up large files in multiple sections • Create archival backups for long-term storage • Manage recovery catalog, for example, merge multiple catalog versions
RMAN News
Enhanced configuration of deletion policies
Archived redo logs are eligible for deletion only when not needed by required consumers such as Data Guard, Streams, Flashback Database, and so on. When you CONFIGURE an archived log deletion policy, the configuration applies to all archiving destinations, including the flash recovery area. Both BACKUP ... DELETE INPUT and DELETE ... ARCHIVELOG use this configuration, as does the flash recovery area. When you back up the recovery area, RMAN can fail over to other archived redo log destinations if the archived redo log in the flash recovery area is inaccessible or corrupted.
Active Database Duplication
You can use the "network-aware" DUPLICATE command to create a duplicate or standby database over the network without a need for pre-existing database backups.
Improved block media recovery performance
You can use the RECOVER command (formerly the BLOCKRECOVER command) to recover individual data blocks. If flashback logging is enabled and contains older, uncorrupted blocks, then RMAN can use these blocks, thereby speeding up block media recovery.
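The three enhancements above can be sketched as RMAN commands (the auxiliary database name and the datafile and block numbers are illustrative; the command forms are documented 11g syntax):

```
# Archived logs become deletable only after being applied on the standby
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;

# Network-aware duplication: no pre-existing backups are required
DUPLICATE TARGET DATABASE TO auxdb FROM ACTIVE DATABASE;

# Block media recovery with the RECOVER command (formerly BLOCKRECOVER)
RECOVER DATAFILE 4 BLOCK 17;
```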
Oracle Database 11g: New Features for Administrators 10 - 3
RMAN News
Faster and optimized backup through: • Parallel backup and restore for very large files ** • Fast incremental backup on physical standby • Configuring backup compression
Note: The features and tasks marked with ** are discussed in more detail in this lesson.
RMAN News
Parallel backup and restore for very large files
Backups of large data files now use multiple parallel server processes to efficiently distribute the workload for each file. This feature improves the performance of backups. Look for more details later in this section.
Fast incremental backups on physical standby database
You can enable block change tracking on a physical standby database (use the existing ALTER DATABASE ENABLE/DISABLE BLOCK CHANGE TRACKING SQL statement). RMAN will then track changed blocks during standby managed recovery. This offloads block change tracking to the standby database and allows the same fast incremental backups, using the change tracking file, that have been available on the primary. This feature enables faster incremental backups on a physical standby database than in previous releases.
Configuring backup compression
You can use the CONFIGURE command to choose between the BZIP2 and ZLIB compression algorithms for RMAN backups.
Note: For more details about the RECOVER, VALIDATE, and CONFIGURE commands, see the Oracle Database Backup and Recovery Reference.
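Two of these features reduce to one-line commands; as a sketch (documented 11g syntax):

```
# On the physical standby, enable block change tracking (run in SQL*Plus):
#   ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;

# In RMAN, select the backup compression algorithm:
CONFIGURE COMPRESSION ALGORITHM 'ZLIB';
```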
Oracle Database 11g: New Features for Administrators 10 - 4
RMAN News
Other enhancements:
• Archival backups for long-term storage **
• Improved block corruption detection

Recovery catalog enhancements:
• Merging catalogs **
• Restricting DBA backup catalog access to owned databases

Integration enhancements:
• Automatic Network File System (NFS)
• RMAN integration with VSS-enabled applications

Note: The features and tasks marked with ** are discussed in more detail in this lesson.
RMAN News
Archival backups for long-term storage
Long-term backups created with the KEEP option no longer require all archived logs to be retained when the backup is taken online. Instead, only the archived logs needed to recover the specified data files to a consistent point in time are backed up (along with the specified data files and a control file). This functionality reduces the archive log backup storage needed for online, long-term KEEP backups, and simplifies the command by using a single format string for all the files needed to restore and recover the backup.
Improved block corruption detection
Several database components and utilities, including RMAN, can now detect a corrupt block and record it in the V$DATABASE_BLOCK_CORRUPTION view. The Oracle database automatically updates this view when block corruptions are detected or repaired. The VALIDATE command is enhanced with many new options such as VALIDATE ... BLOCK and VALIDATE DATABASE.
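As a sketch of both features (the retention duration, restore point, and tag names are hypothetical; the command forms are documented 11g RMAN syntax):

```
# Archival backup: self-contained and exempt from the retention
# policy for one year, restorable via the named restore point
BACKUP DATABASE
  KEEP UNTIL TIME 'SYSDATE + 365'
  RESTORE POINT year_end_2007
  TAG 'yearly_archive';

# Enhanced validation: check the whole database for corrupt blocks;
# findings are recorded in V$DATABASE_BLOCK_CORRUPTION
VALIDATE DATABASE;
```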
Oracle Database 11g: New Features for Administrators 10 - 5
RMAN News
Merging catalogs
The new IMPORT CATALOG command allows one catalog schema to be merged into another, either the whole schema or just the metadata for specific databases in the catalog. This simplifies catalog management by allowing separate catalog schemas, created in different versions, to be merged into a single catalog schema.
Restricting DBA backup catalog access to owned databases
The owner of a recovery catalog can grant or revoke access to a subset of the catalog to database users. This subset is called a "virtual private catalog." See the Security New Features lesson for more details.
Automatic Network File System (NFS)
The NFS client is implemented as part of the Oracle kernel in the ODM library. This improves the ease of use and stability of accessing NAS storage systems, as well as increasing availability across different platforms while maintaining a consistent interface. For more details, see the General Database Enhancements eStudy.
Integration with VSS-enabled applications
The Volume Shadow Copy Service (VSS) is an infrastructure on Windows. The Oracle VSS Writer is integrated with VSS-enabled applications, so you can use VSS-enabled software and storage systems to back up and restore an Oracle database. A key benefit is the ability to make a shadow copy of an open database. You can also use the BACKUP INCREMENTAL LEVEL 1 ... FROM SCN command in RMAN to make an incremental backup of a VSS shadow copy. For more details, see the Windows eStudy.
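The IMPORT CATALOG command can be sketched as follows (the connect strings and DBIDs are hypothetical; the command itself is documented 11g syntax):

```
# Connect to the destination recovery catalog first
CONNECT CATALOG rco/password@destcat;

# Merge an entire source catalog schema into the current catalog
IMPORT CATALOG rman10/password@srccat;

# Or import only the metadata for specific databases
IMPORT CATALOG rman10/password@srccat DBID = 1423241, 1423242;
```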
Oracle Database 11g: New Features for Administrators 10 - 6
What You Already Know Oracle Data Guard Terminology
Oracle Data Guard Terminology Oracle Data Guard is a management, monitoring, and automation software infrastructure that works with a production database and one or more standby databases to protect your data against failures, errors, and corruptions that might otherwise destroy your database. It protects critical data by providing facilities to automate the creation, management, and monitoring of the databases and other components in a Data Guard configuration. It automates the process of maintaining a copy of an Oracle production database (called a standby database) that can be used if the production database is taken offline for routine maintenance or becomes damaged. In a Data Guard configuration, a production database is referred to as a primary database. A standby database is a synchronized copy of the primary database. Using a backup copy of the primary database, you can create from one to nine standby databases. The standby databases, together with the primary database, make up a Data Guard configuration. Each standby database is associated with only one primary database. Note: You can use the Cascaded Redo Log Destinations feature to incorporate more than nine standby databases in your configuration. Configuring standby redo log files is highly recommended on all databases in a Data Guard configuration, including the primary database, to aid in role reversal.
Improved Archive Log Management Prior to Oracle Database 11g, when the flash recovery area was backed up, archived logs outside the flash recovery area were not considered. If an archived log had a corrupt block or was missing from the flash recovery area, the backup job failed. In Oracle Database 11g, archive log failover uses another copy of an archived log to continue writing the backup when the archived log is found to be missing or to have a corrupt block. This improvement is transparent. An optional local archive log destination, together with the flash recovery area and new RMAN configuration options, enables you to recover from the loss of the flash recovery area. In general, the Oracle server and RMAN keep archived logs as long as possible in the flash recovery area. When space is needed, RMAN first ensures that the user-defined flashback retention time is met before automatically deleting the archived logs. In Oracle Database 11g, you can configure archive log deletion based on: • Shipped configuration (all archived log files are transferred to specified remote locations) • Number of backups available on a specific device type RMAN deletes archived logs outside the flash recovery area when the BACKUP command with the DELETE INPUT option or the DELETE ARCHIVELOG command is executed.
Configuring Archivelog Deletion Policies You can set your archive log deletion policy in Enterprise Manager for all databases. Choose from the following options: • Delete archivelogs that are backed up to a tertiary device and are obsolete based on the retention policy. • Delete archivelogs after they are applied to the standby database. - The APPLIED ON STANDBY archive log deletion policy is enhanced to apply to all standby destinations instead of only mandatory remote destinations. In other words, an archived log is not deleted from the database node until all the dependent destinations have consumed it. • Delete archivelogs after they are shipped to all standby databases. - The logs become eligible for deletion from the local archive log destination as soon as they are shipped to all remote destinations. Note: If you specify the DELETE command with the FORCE keyword, RMAN ignores the policy settings, deletes the specified files (whether or not they exist on the media), removes repository records, and displays the number of deleted objects at the end of the job. Beta Note: The screenshot does not yet show all new options. For example, RMAN has the BACKED UP n TIMES TO DEVICE TYPE <device_type> option with the log deletion configuration.
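The EM options above correspond to RMAN configuration commands similar to the following sketch (exact option spellings can vary by release, so verify against your RMAN client):

```sql
-- Keep logs until they are applied on the standby destination(s)
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;

-- Keep logs until they are backed up twice to tape
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 2 TIMES TO DEVICE TYPE sbt;

-- Return to the default (no deletion policy)
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;
```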
Deleting Backed Up Files
1. Configuring a deletion policy:
CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 2 TIMES TO DEVICE TYPE sbt;
Deleting Backed Up Files 1. Assume that you have an archived redo log deletion policy as shown in step 1. 2. The DELETE ... ARCHIVELOG command deletes all archived logs that meet the requirements of the configured deletion policy, which specifies that they must be backed up twice to tape. The DELETE INPUT and DELETE OBSOLETE commands work in the same way. 3. The third example assumes that you have two archiving destinations set: /arch1 and /arch2. The command backs up one archived redo log for each unique sequence number. For example, if archived redo log 1000 is in both directories, RMAN backs up only one copy of this log. The DELETE INPUT clause with the ALL keyword specifies that RMAN should delete all archived redo logs from both archiving directories after the backup. With the configuration in step 1, the DELETE INPUT clause will not delete an archived redo log until it has been backed up twice to tape.
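Steps 2 and 3 described above might look like the following sketch (the directories /arch1 and /arch2 come from the example; the exact commands on the original slide are not reproduced here):

```sql
-- Step 2: delete archived logs; the configured policy ensures each log
-- has already been backed up twice to tape before it is removed
RMAN> DELETE ARCHIVELOG ALL;

-- Step 3: back up one copy of each unique log sequence, then delete
-- the logs from both /arch1 and /arch2
RMAN> BACKUP ARCHIVELOG ALL DELETE ALL INPUT;
```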
Duplicating a Database
• With network (no backups required) • Including customized SPFILE • Via Enterprise Manager or RMAN command line
Duplicating a Database Prior to Oracle Database 11g, you could create a duplicate database with RMAN for testing or for standby. It required the source database, a backup copy on the source or on tape, a copy of the backup on the destination system, and the destination database itself. Oracle Database 11g greatly simplifies this process. You can instruct the source database to make online image copies and archived log copies directly to the auxiliary instance, by using Enterprise Manager or the FROM ACTIVE DATABASE clause of the RMAN DUPLICATE command. The database files come from a TARGET or source database. They are copied via an inter-instance network connection to a destination or AUXILIARY instance. RMAN then uses a "memory script" (one that is contained only in memory) to complete recovery and open the database.
Active Database Duplication Usage Notes for Active Database Duplication: • Oracle Net must be aware of the source and destination databases. The FROM ACTIVE DATABASE clause implies network action. • If the source database is open, it must have archive logging enabled. • If the source database is in the mounted state (and not a standby), it must have been shut down cleanly. • Availability of the source database is not affected by active database duplication, but the source database instance provides CPU cycles and network bandwidth. Enterprise Manager Interface In Enterprise Manager, select Data Movement > Clone Database.
Active Database Duplication Usage Notes for Active Database Duplication Password files are copied to the destination. The destination must have the same SYS user password as the source. In other words, at the beginning of the active database duplication process, both databases (source and destination) must have password files. When duplicating a standby database, the password file from the primary database overwrites the current (temporary) password file on the standby database. When you use the command line and are not duplicating for a standby database, you need to use the PASSWORD clause (with the FROM ACTIVE DATABASE clause of the RMAN DUPLICATE command).
Customize Destination Options Prior to Oracle Database 11g, the SPFILE parameter file was not copied, because it requires alterations appropriate for the destination environment. You had to copy the SPFILE to the new location, edit it, and specify it when starting the instance in NOMOUNT mode or on the RMAN command line, to be used before opening the newly copied database. With Oracle Database 11g, you provide your list of parameters and desired values, and the system sets them. The most obvious parameters are those whose value contains a directory specification. All parameter values that match your choice (with the exception of the DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT parameters) are replaced. Note the case-sensitivity of parameters: The case must match for PARAMETER_VALUE_CONVERT. With the FILE_NAME_CONVERT parameters, pattern matching is OS specific. This functionality is equivalent to pausing the database duplication after restoring the SPFILE and issuing ALTER SYSTEM SET commands to modify the parameter file (before the instance is mounted). The example shows how to clone a database on the same host and in the same Oracle Home, with the use of different top-level disk locations. The source directories are under u01, the destination directories under u31. You need to confirm your choice.
Database Duplication: Job Run The example of the Job Run page shows the following steps: 1. Source Preparation 2. Create Control File 3. Destination Directories Creation 4. Copy Initialization and Password Files * Skip Copy or Transfer Controlfile 5. Destination Preparation 6. Duplicate Database * Skip Creating Standby Controlfile * Skip Switching Clone Type 7. Recover Database 8. Add Temporary Files 9. Add EM Target 10. Cleanup Source Temporary Directory
The RMAN DUPLICATE Command
DUPLICATE TARGET DATABASE TO aux
FROM ACTIVE DATABASE
SPFILE
  PARAMETER_VALUE_CONVERT '/u01', '/u31'
  SET SGA_MAX_SIZE = '200M'
  SET SGA_TARGET = '125M'
  SET LOG_FILE_NAME_CONVERT = '/u01','/u31'
DB_FILE_NAME_CONVERT '/u01','/u31';
The RMAN DUPLICATE Command The example assumes you have previously connected to both the source or TARGET and the destination or AUXILIARY instance, which have a common directory structure but different top-level disks. The destination instance uses automatically configured channels. • This RMAN DUPLICATE command duplicates an open database. • The FROM ACTIVE DATABASE clause indicates that you are not using backups (it implies network action), and that the target is either open or mounted. • The SPFILE clause indicates that the SPFILE will be restored and modified prior to opening the database. • The repeating SET clause essentially issues an ALTER SYSTEM SET param = value SCOPE=SPFILE command. You can provide as many of these as necessary. Prerequisites: The AUXILIARY instance • Is in the NOMOUNT state, having been started with a minimal pfile. • The pfile requires only the DB_NAME and REMOTE_LOGIN_PASSWORDFILE parameters. • The password file must exist and have the same SYS user password as the target. • The directory structure must be in place with the proper permissions. • Connect to AUXILIARY using a net service name as the SYS user.
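A minimal preparation of the AUXILIARY instance, following the prerequisites above, could look like this sketch (the instance name aux, the file paths, and the net service names are hypothetical):

```sql
-- initaux.ora: the minimal pfile needs only these two parameters
--   DB_NAME=aux
--   REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE

-- Create a password file with the same SYS password as the target:
--   $ orapwd file=$ORACLE_HOME/dbs/orapwaux password=<sys_password>

SQL> STARTUP NOMOUNT PFILE='/u31/admin/aux/initaux.ora';

-- Then, in RMAN, connect to both instances before running DUPLICATE:
RMAN> CONNECT TARGET sys@src_db
RMAN> CONNECT AUXILIARY sys@aux_db
```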
Duplicating a Standby Database
DUPLICATE TARGET DATABASE FOR STANDBY
FROM ACTIVE DATABASE
SPFILE
  PARAMETER_VALUE_CONVERT '/u01', '/u31'
  SET "DB_UNIQUE_NAME"="FOO"
  SET SGA_MAX_SIZE = "200M"
  SET SGA_TARGET = "125M"
  SET LOG_FILE_NAME_CONVERT = '/u01','/u31'
DB_FILE_NAME_CONVERT '/u01','/u31';
Duplicating a Standby Database The example assumes that you are connected to the target and auxiliary instances and that the two environments have the same disk and directory structure. The FOR STANDBY clause initiates the creation of a standby database without using backups. The example uses u01 as the top-level source disk and u31 as the top-level destination directory. All parameter values that match your choice (with the exception of the DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT parameters) are replaced in the SPFILE.
RMAN Multi-Section Backups Overview
Multi-Section backups: • Created by RMAN • With your specified size value • Processed independently (serial or in parallel) • Producing multi-piece backup sets
RMAN Multi-Section Backups Overview Oracle data files can be up to 128 TB in size. In prior versions, the smallest unit of RMAN backup was an entire file. This is not practical with such large files. In Oracle Database 11g, RMAN can break up large files into sections and back up and restore these sections independently if you specify the SECTION SIZE option. Each file section is a contiguous range of blocks in a file. Each file section can be processed independently, either serially or in parallel. Backing up a file in separate sections can improve performance and allows backups of large files to be restarted. A multi-section backup job produces a multi-piece backup set. Each piece contains one section of the file. All sections of a multi-section backup, except perhaps the last section, are the same size. There is a maximum of 256 sections per file. Tip: You should not apply large values of parallelism to back up a large file that resides on a small number of disks. This feature is built into RMAN. No installation is required beyond the normal installation of Oracle Database 11g. COMPATIBLE must be set to at least 11.0, because earlier releases cannot restore multi-section backups. In Enterprise Manager, select Availability > Backup Settings > Backup Set (tabbed page).
Using RMAN Multi-Section Backups
BACKUP and VALIDATE DATAFILE command option: SECTION SIZE <integer> [K | M | G]
Using RMAN Multi-Section Backups The BACKUP and VALIDATE DATAFILE commands accept a new option: SECTION SIZE <integer> [K | M | G]. Specify your planned size for each backup section. The option is both a backup-command and backup-spec level option, so you can apply different section sizes to different files in the same backup job. Viewing metadata about your multi-section backup • The V$BACKUP_SET and RC_BACKUP_SET views have a MULTI_SECTION column that indicates whether this is a multi-section backup. • The V$BACKUP_DATAFILE and RC_BACKUP_DATAFILE views have a SECTION_SIZE column that specifies the number of blocks in each section of a multi-section backup. Zero means a whole-file backup.
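A short sketch of a multi-section backup and the related metadata queries (the data file number and section size are hypothetical):

```sql
-- Break data file 6 into sections of at most 500 MB each;
-- allocated channels can then work on the sections in parallel
RMAN> BACKUP SECTION SIZE 500M DATAFILE 6;

-- Check whether the resulting backup sets are multi-section
SQL> SELECT recid, multi_section FROM v$backup_set;

-- SECTION_SIZE is the number of blocks per section (0 = whole-file backup)
SQL> SELECT file#, section_size FROM v$backup_datafile;
```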
Creating Archival Backups with EM If you have business requirements to keep records for a long time, you can use RMAN to create a self-contained archival backup of the database or tablespaces. RMAN does not apply the regular retention policies to this backup. Place your archival backup in a different long-term storage area, rather than in the flash recovery area. To keep a backup for a long time, perform the following steps in Enterprise Manager: 1. Select Availability > Schedule Backup > Schedule Customized Backup. 2. Follow the steps of the Schedule Customized Backup wizard until you are on the Settings page. 3. Click Override Current Settings > Policy. In the Override Retention Policy section, you can select to keep a backup for a specified number of days. A restore point is generated based on the backup job name. RMAN syntax: KEEP {FOREVER | UNTIL TIME 'SYSDATE + n'} RESTORE POINT <rsname>
Backups created with the KEEP option include the SPFILE, control files, and archived redo log files required to restore this backup. This backup is a snapshot of the database at a point in time and can be used to restore the database to another host.
Creating Archival Backups with RMAN
Specifying the KEEP clause when the database is online includes both data file and archived log backup sets:
{KEEP {FOREVER | UNTIL TIME [=] 'date_string'} | NOKEEP} [RESTORE POINT rsname]
Creating Archival Backups with RMAN Prior to Oracle Database 11g, if you needed to preserve an online backup for a specified amount of time, RMAN assumed you might want to perform point-in-time recovery for any time within that period, and it retained all the archived logs for that time period unless you specified NOLOGS. However, you may have a requirement to simply keep the backup (and what is necessary to keep it consistent and recoverable) for a specified amount of time, for example, for two years. With Oracle Database 11g, you can use the KEEP option to generate archival database backups that satisfy business or legal requirements. The KEEP option is an attribute of the backup set (not of the individual backup piece) or copy. The KEEP option overrides any configured retention policy for this backup. You can retain archival backups so that they are considered obsolete after a specified time (KEEP UNTIL) or never (KEEP FOREVER). The KEEP FOREVER clause requires the use of a recovery catalog. The RESTORE POINT clause creates a restore point in the control file that assigns a name to a specific SCN that can be restored from this backup. RMAN includes the data files, the archived log files (only those needed to recover an online backup), and the relevant autobackup files. All these files must go to the same media family (or group of tapes) and have the same KEEP attributes.
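For example, an archival backup kept for two years might be created as in this sketch (the format string, tag, and restore point name are hypothetical):

```sql
RMAN> BACKUP DATABASE
        FORMAT '/arch_backup/db_%U'
        TAG quarterly
        KEEP UNTIL TIME 'SYSDATE + 730'
        RESTORE POINT fy08q1;
```

With the database online, RMAN also backs up the archived logs needed to make the backup consistent, as described above.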
Managing Archival Database Backups
1. Archiving a database backup:
CONNECT TARGET /
CONNECT CATALOG rman/rman@catdb
CHANGE BACKUP TAG 'consistent_db_bkup' KEEP FOREVER;
2. Changing the status of a database copy:
CHANGE COPY OF DATABASE CONTROLFILE NOKEEP;
Managing Archival Database Backups The CHANGE command changes the exemption status of a backup or copy in relation to the configured retention policy. For example, you can specify CHANGE ... NOKEEP to make a backup that is currently exempt from the retention policy eligible for the OBSOLETE status. The first example changes a consistent backup into an archival backup, which you plan to store offsite. Because the database is consistent and therefore requires no recovery, you do not need to save archived redo logs with the backup. The second example specifies that any long-term image copies of data files and control files should lose their exempt status and so become eligible to be obsolete according to the existing retention policy. Deprecated clause: KEEP [LOGS | NOLOGS]. Preferred syntax: KEEP ... RESTORE POINT. Note: The RESTORE POINT option is not valid with CHANGE. You cannot use CHANGE ... UNAVAILABLE or KEEP for files stored in the flash recovery area.
Managing Recovery Catalogs
Managing recovery catalogs: 1. Create the recovery catalog. 2. Register your target databases in the recovery catalog. 3. If desired, merge recovery catalogs. NEW 4. If needed, catalog any older backups. 5. If needed, create virtual recovery catalogs for specific users. NEW 6. Protect the recovery catalog.
Managing Recovery Catalogs 1. Create the recovery catalog. 2. Register your target databases in the recovery catalog. This step enables RMAN to store metadata for the target databases in the recovery catalog. 3. If desired, you can also use the IMPORT CATALOG command to merge recovery catalogs. 4. If needed, catalog any older backups whose records are no longer stored in the target control file. 5. If needed, create virtual recovery catalogs for specific users and determine the metadata to which they are permitted access. For more details, see the lesson Security New Features. 6. Protect the recovery catalog by including it in your backup and recovery strategy. The recovery catalog contains metadata about RMAN operations for each registered target database. The catalog includes the following types of metadata: • Data file and archived redo log backup sets and backup pieces • Data file copies • Archived redo logs and their copies • Tablespaces and data files on the target database • Stored scripts, which are named user-created sequences of RMAN commands • Persistent RMAN configuration settings
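Steps 1 and 2 above correspond to commands similar to the following sketch, reusing the catowner/catdb names from the notes (the target connects with operating system authentication here):

```sql
RMAN> CONNECT CATALOG catowner@catdb
RMAN> CREATE CATALOG;

RMAN> CONNECT TARGET /
RMAN> REGISTER DATABASE;
```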
Managing Recovery Catalogs (continued) The enrolling of a target database in a recovery catalog for RMAN use is called registration. The recommended practice is to register all of your target databases in a single recovery catalog. For example, you can register the prod1, prod2, and prod3 databases in a single catalog owned by the catowner schema in the catdb database. The owner of a centralized recovery catalog, which is also called the base recovery catalog, can grant or revoke restricted access to the catalog to other database users. All metadata is stored in the base catalog schema. Each restricted user has full read-write access to his own metadata, which is called a virtual private catalog. The recovery catalog obtains crucial RMAN metadata from the control file of each registered target database. The resynchronization of the recovery catalog ensures that the metadata that RMAN obtains from the control files is current. You can use a stored script as an alternative to a command file for managing frequently used sequences of RMAN commands. The script is stored in the recovery catalog rather than on the file system. A local stored script is associated with the target database to which RMAN is connected when the script is created, and can only be executed when you are connected to this target database. A global stored script can be run against any database registered in the recovery catalog. You can use a recovery catalog in an environment in which you use or have used different versions of the database. As a result, your environment can have different versions of the RMAN client, recovery catalog database, recovery catalog schema, and target database. You can now merge one recovery catalog (or metadata for specific databases in the catalog) into another recovery catalog for ease of management.
The IMPORT CATALOG Command With the IMPORT CATALOG command, you import the metadata from one recovery catalog schema into a different catalog schema. If you created catalog schemas of different versions to store metadata for multiple target databases, then this command enables you to maintain a single catalog schema for all databases. 1. RMAN must be connected to the destination recovery catalog (for example, the cat111 schema), which is the catalog into which you want to import catalog data. This is the first step in all examples above. The command syntax is: IMPORT CATALOG <connect_string> [DBID = <dbid> [, <dbid>, ...]] [DB_NAME = <dbname> [, <dbname>, ...]] [NO UNREGISTER]; where <connect_string> is the source recovery catalog connect string. The version of the source recovery catalog schema must be equal to the current version of the RMAN executable. If needed, upgrade the source catalog to the current RMAN version. DBID: You can specify the list of database IDs whose metadata should be imported from the source catalog schema. When not specified, RMAN merges metadata for all database IDs from the source catalog schema into the destination catalog schema. RMAN issues an error if a database whose metadata is merged is already registered in the recovery catalog schema.
The IMPORT CATALOG Command (continued) DB_NAME: You can specify the list of database names whose metadata should be imported. If a database name is ambiguous, RMAN issues an error. NO UNREGISTER: By default, the imported database IDs are unregistered from the source recovery catalog schema after a successful import. By using the NO UNREGISTER option, you can force RMAN to keep the imported database IDs in the source catalog schema. Import Examples continued: 2. In this example, the cat102 user owns an RMAN catalog (version 10.2) in the srcdb database. You want RMAN to import all registered databases and unregister them in the source catalog. 3. The cat92 user owns an RMAN catalog (version 9.2) in the srcdb database. You want RMAN to import the databases with the DBIDs 1423241 and 1423242, and unregister them in the source catalog. 4. The srcdb database contains three different recovery catalogs. RMAN imports metadata for all database IDs (registered in these catalogs) into the cat111 schema in the destdb database. All imported target databases are unregistered from their source catalogs, except for the databases registered in the cat92 schema. Additional usage details: • Ensure that no target database is registered in both the source catalog schema and the destination catalog schema. If a target database is registered in both schemas, then unregister this database from the source catalog and retry the import. • If the operation fails in the middle of the import, then the import is rolled back. There is never a state of partial import. • When stored scripts in the source and destination catalog schemas have name conflicts, RMAN renames the stored script of the source catalog schema.
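The import examples discussed above can be sketched as follows (schema and database names are taken from the notes):

```sql
RMAN> CONNECT CATALOG cat111@destdb

-- Example 2: import all databases from the 10.2 catalog, unregistering them there
RMAN> IMPORT CATALOG cat102@srcdb;

-- Example 3: import only two databases, identified by DBID
RMAN> IMPORT CATALOG cat92@srcdb DBID = 1423241, 1423242;

-- Keep the imported databases registered in the source catalog as well
RMAN> IMPORT CATALOG cat92@srcdb NO UNREGISTER;
```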
Summary
In this lesson, you should have learned how to: • Describe Oracle Database 11g new and enhanced RMAN features • Configure archivelog deletion policies • Duplicate active databases by using the Oracle network (without backups) • Back up large files in multiple sections • Create archival backups for long-term storage • Manage recovery catalogs, for example, by merging multiple catalog versions
Objectives
After completing this lesson, you should be able to: • Describe transactions and undo • Describe undo backup optimization • Prepare your database for flashback • Create, change, and drop a flashback data archive • View flashback data archive metadata • Set up flashback transaction prerequisites • Query transactions with and without dependencies • Choose back-out options and flash back transactions • Use EM LogMiner • Review transaction details
Using Flashback and LogMiner
New and enhanced features in the Oracle Database 11g: • Optimized undo backup • Flashback Data Archive • Flashback Transaction or Job Backout • Browser-Based Enterprise Manager Integrated Interface for LogMiner
Using Flashback and LogMiner Transactions produce undo data. In Oracle Database 11g, undo data that is not needed for transaction recovery (for example, for committed transactions) is not backed up. Flashback Data Archive provides the ability to automatically track and store all transactional changes to a record for the duration of its lifetime. You no longer need to build this intelligence into the application. This feature also provides seamless access to historical data with "as of" queries. You can use Flashback Data Archive for compliance reporting, audit reports, data analysis, and decision support. Oracle Database 11g allows you to flash back selected transactions and all the dependent transactions. This recovery operation uses undo data to create and execute the corresponding compensating transactions that revert the affected data back to its original state. Flashback Transaction increases availability during logical recovery by easily and quickly backing out a specific transaction or set of transactions, and their dependent transactions, with one command, while the database remains online. Enterprise Manager Database Control now has an interface for LogMiner. In prior releases, administrators were required to install and use the standalone Java Console for LogMiner. With this new interface, administrators have a task-based, intuitive approach to using LogMiner.
What You Already Know and What Is New Transactions and Undo
Transactions and Undo When a transaction starts, it is assigned to an undo segment. Throughout the life of the transaction, when data is changed, the original "old" values are copied into the undo segment. You can see which transactions are assigned to which undo segments by checking the V$TRANSACTION dynamic performance view. Undo segments are specialized segments that are automatically created by the instance as needed to support transactions. Like all segments, undo segments are made up of extents, which, in turn, consist of data blocks. Undo segments automatically grow and shrink as needed, acting as a circular storage buffer for their assigned transactions. Transactions fill extents in their undo segments until a transaction is completed or all space is consumed. If an extent fills up and more space is needed, the transaction acquires that space from the next extent in the segment. After all extents have been consumed, the transaction either wraps around back into the first extent or requests a new extent to be allocated to the undo segment. Note: Parallel DML operations can actually cause a transaction to use more than one undo segment. To learn more about parallel DML execution, see the Oracle Database Administrator’s Guide.
What You Already Know and What Is New Guaranteeing Undo Retention
SELECT statements running 15 minutes or less are always satisfied.
If a transaction generates more undo than there is space in the undo tablespace, it will fail.
Guaranteeing Undo Retention The default undo behavior is to overwrite committed transactions that have not yet expired rather than to allow an active transaction to fail because of lack of undo space. (So in case of conflict, transactions have precedence over queries.) This behavior can be changed by guaranteeing retention. With guaranteed retention, undo retention settings are enforced even if they cause transactions to fail. (So in case of conflict, queries have precedence over transactions.) RETENTION GUARANTEE is a tablespace attribute rather than an initialization parameter. This attribute can be changed only with SQL command-line statements. The syntax to change an undo tablespace to guarantee retention is: SQL> ALTER TABLESPACE undotbs1 RETENTION GUARANTEE;
To return a guaranteed undo tablespace to its normal setting, use the following command: SQL> ALTER TABLESPACE undotbs1 RETENTION NOGUARANTEE;
Backup Optimization Prior to Oracle Database 11g, RMAN had two ways of eliminating blocks from the backup piece (applicable only to full backups): • Null block compression: Blocks that have never been used are not backed up. • Unused block compression: Blocks that are currently not in use are not backed up.
Backup Optimization (Continued) In Oracle Database 11g, undo data that is not needed for transaction recovery (for example, for committed transactions) is not backed up. The benefit is reduced overall backup time and storage, because undo that applies to committed transactions is not backed up. This optimization is automatically enabled.
Preparing Your Database for Flashback To enable flashback features for an application, you must perform these tasks: • Create an undo tablespace with enough space to keep the required data for flashback operations. The more often users update the data, the more space is required. The database administrator usually calculates the space requirement. If you are uncertain about your space requirements, you can start with an automatically extensible undo tablespace, observe it through one business cycle (for example, 1 or 2 days), collect undo block information with the V$UNDOSTAT view, calculate your space requirements, and use them to create an appropriately sized fixed undo tablespace. (The calculation formula is in the Oracle Database Administrator's Guide.) • By default, Automatic Undo Management is enabled. If needed, enable Automatic Undo Management, as explained in the Oracle Database Administrator's Guide. • For a fixed-size undo tablespace, the Oracle database automatically tunes the system to give the undo tablespace the best possible undo retention. • For an automatically extensible undo tablespace (the default), the Oracle database retains undo data to satisfy, at a minimum, the retention period needed by the longest-running query and the undo retention threshold specified by the UNDO_RETENTION parameter.
Oracle Database 11g: New Features for Administrators 11 - 7
Preparing Your Database for Flashback (Continued) You can query V$UNDOSTAT.TUNED_UNDORETENTION to determine the amount of time for which undo is retained for the current undo tablespace. Setting the UNDO_RETENTION parameter does not guarantee that unexpired undo data is not overwritten. If the system needs more space, the Oracle database can overwrite unexpired undo with more recently generated undo data. • Specify the RETENTION GUARANTEE clause for the undo tablespace to ensure that unexpired undo data is not discarded. • Grant flashback privileges to users, roles, or applications that need to use flashback features. To satisfy long retention requirements, create a flashback data archive.
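The preparation steps above can be sketched in SQL. This is a minimal sketch: the tablespace name UNDOTBS1 and the grantee and table names (APP_USER, HR.REGIONS) are illustrative assumptions, not names from the course environment.

```sql
-- Check the currently tuned undo retention (in seconds)
SELECT MAX(tuned_undoretention) FROM v$undostat;

-- Guarantee that unexpired undo is never overwritten
-- (assumes an undo tablespace named UNDOTBS1)
ALTER TABLESPACE undotbs1 RETENTION GUARANTEE;

-- Grant flashback privileges (APP_USER and HR.REGIONS are illustrative)
GRANT FLASHBACK ON hr.regions TO app_user;   -- a single table
GRANT FLASHBACK ANY TABLE TO app_user;       -- all tables
```

Note that RETENTION GUARANTEE trades query success for DML success: if space runs out, transactions that need to generate undo may fail rather than overwrite unexpired undo.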
Oracle Database 11g: New Features for Administrators 11 - 8
Flashback Data Archive
(Slide graphic: DML operations change original data in the buffer cache and generate undo data; a background process collects the original data and writes it to a flashback data archive.)
Flashback Data Archive Flashback data archives allow you to automatically track and archive the data in tables enabled for flashback data archive. This ensures that flashback queries obtain SQL-level access to the versions of database objects without getting a snapshot-too-old error. A flashback data archive provides the ability to track and store all transactional changes to a "tracked" table over its lifetime. It is no longer necessary to build this intelligence into your application. Flashback data archives are useful for compliance, audit reports, data analysis, and decision support systems. The flashback data archive background process starts with the database. A flashback data archive consists of one or more tablespaces or parts thereof. You can have multiple flashback data archives. Each is configured with a retention duration. Based on your retention duration requirements, you should create different flashback data archives: for example, one for all records that must be kept for two years and another for all records that must be kept for five years. The database automatically purges all historical information on the day after the retention period expires.
Oracle Database 11g: New Features for Administrators 11 - 9
Flashback Data Archive Process
1. Create the Flashback Data Archive
2. Specify the default Flashback Data Archive
3. Enable the Flashback Data Archive
4. View Flashback Data Archive data
Flashback Data Archive Process The first step is to create a Flashback Data Archive. A Flashback Data Archive consists of one or more tablespaces. You can have multiple Flashback Data Archives. Second, you can specify a default Flashback Data Archive for the system. A Flashback Data Archive is configured with retention time. Data archived in the Flashback Data Archive is retained for the retention time. Third, you can enable flashback archiving (and then disable it again) for a table. While flashback archiving is enabled for a table, some DDL statements are not allowed on that table. By default, flashback archiving is off for any table. Last, you can examine the Flashback Data Archives. There are static data dictionary views that you can query for information about Flashback Data Archives.
Oracle Database 11g: New Features for Administrators 11 - 10
Flashback Data Archive Scenario
Using Flashback Data Archive to access historical data:

-- Create the Flashback Data Archive
CREATE FLASHBACK ARCHIVE DEFAULT fla1 TABLESPACE tbs1
  QUOTA 10G RETENTION 5 YEAR;
-- Specify the default Flashback Data Archive
ALTER FLASHBACK ARCHIVE fla1 SET DEFAULT;
-- Enable Flashback Data Archive
ALTER TABLE inventory FLASHBACK ARCHIVE;
ALTER TABLE stock_data FLASHBACK ARCHIVE;
-- Query historical data
SELECT product_number, product_name, count
FROM   inventory AS OF TIMESTAMP
       TO_TIMESTAMP('2007-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS');
Flashback Data Archive Scenario You create a Flashback Data Archive with the CREATE FLASHBACK ARCHIVE statement. • You can optionally specify it as the default Flashback Data Archive for the system. If you omit this option, you can still make this Flashback Data Archive the default later. • You need to provide the name of the Flashback Data Archive. • You need to provide the name of the first tablespace of the Flashback Data Archive. • You can identify the maximum amount of space that the Flashback Data Archive can use in the tablespace. The default is unlimited. Unless your space quota on the first tablespace is unlimited, you must specify this value; otherwise, error ORA-55621 ensues. • You need to provide the retention time (the number of days that Flashback Data Archive data for the table is guaranteed to be stored). In the example shown above in step 1, a default Flashback Data Archive named FLA1 is created that uses up to 10 GB of tablespace TBS1, and whose data is retained for five years. In the second step shown above, the default Flashback Data Archive is specified. By default, the system has no Flashback Data Archive. You can set it in one of two ways: 1. Specify the name of an existing Flashback Data Archive in the SET DEFAULT clause of the ALTER FLASHBACK ARCHIVE statement. 2. Include DEFAULT in the CREATE FLASHBACK ARCHIVE statement when you create a Flashback Data Archive. In the third step shown in the previous slide, flashback archiving is enabled. If Automatic Undo Management is disabled, you receive error ORA-55614 if you try to modify the table.
Oracle Database 11g: New Features for Administrators 11 - 11
Flashback Data Archive Scenario (Continued) To enable flashback archiving for a table, include the FLASHBACK ARCHIVE clause in either the CREATE TABLE or ALTER TABLE statement. In the FLASHBACK ARCHIVE clause, you can specify the Flashback Data Archive where the historical data for the table will be stored. The default is the default Flashback Data Archive for the system. To disable flashback archiving for a table, specify NO FLASHBACK ARCHIVE in the ALTER TABLE statement. The last statement shown in the previous slide shows how to retrieve the inventory of all items at the beginning of the year 2007. Continuing the previous examples: • Example 4 adds up to 5 GB of the TBS3 tablespace to the FLA1 flashback data archive. • Example 5 changes the retention time for the FLA1 flashback data archive to two years. • Example 6 purges all historical data older than one day from the FLA1 flashback data archive. Normally, purging is done automatically on the day after your retention time expires, but you can also override this for ad hoc cleanup. • Example 7 drops the FLA1 flashback data archive and its historical data, but not its tablespaces. With the ALTER FLASHBACK ARCHIVE command, you can: • Change the retention time of a flashback data archive • Purge some or all of its data • Add, modify, and remove tablespaces Note: Removing all tablespaces of a flashback data archive causes an error.
Oracle Database 11g: New Features for Administrators 11 - 12
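The statements that examples 4 through 7 refer to are not reproduced in these notes. A sketch of what they would look like for the FLA1 archive follows; the tablespace name TBS3 comes from the notes, while the quota and retention values are taken from the example descriptions.

```sql
-- 4. Add up to 5 GB of the TBS3 tablespace to the archive
ALTER FLASHBACK ARCHIVE fla1 ADD TABLESPACE tbs3 QUOTA 5G;

-- 5. Change the retention time to two years
ALTER FLASHBACK ARCHIVE fla1 MODIFY RETENTION 2 YEAR;

-- 6. Purge all historical data older than one day
ALTER FLASHBACK ARCHIVE fla1
  PURGE BEFORE TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' DAY);

-- 7. Drop the archive and its history (tablespaces are kept)
DROP FLASHBACK ARCHIVE fla1;
```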
Flashback Data Archive
Some scenarios in which you may wish to use a flashback data archive: • To access historical data • To generate reports • For Information Lifecycle Management (ILM) • For auditing • To recover data • To enforce digital shredding
Viewing Flashback Data Archives You can use static data dictionary views to view tracked tables and flashback data archive metadata. To access the USER_FLASHBACK* views, you need table ownership privileges. For the others, you need SYSDBA privileges.
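For example, with DBA privileges you can list the archives, the tablespaces backing them, and the tracked tables. This is a sketch; check the Oracle Database Reference for the exact column lists of these views.

```sql
-- List all flashback data archives and their retention (in days)
SELECT flashback_archive_name, retention_in_days, status
FROM   dba_flashback_archive;

-- List the tablespaces and quotas backing each archive
SELECT flashback_archive_name, tablespace_name, quota_in_mb
FROM   dba_flashback_archive_ts;

-- List the tables tracked by each archive
SELECT owner_name, table_name, flashback_archive_name
FROM   dba_flashback_archive_tables;
```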
Oracle Database 11g: New Features for Administrators 11 - 14
Flashback Data Archive DDL Restrictions
Using any of the following DDL statements on a table enabled for Flashback Data Archive causes error ORA-55610:
• An ALTER TABLE statement that does any of the following:
– Drops, renames, or modifies a column
– Performs partition or subpartition operations
– Converts a LONG column to a LOB column
– Includes an UPGRADE TABLE clause, with or without an INCLUDING DATA clause
Guidelines • You can use the DBMS_FLASHBACK.ENABLE and DBMS_FLASHBACK.DISABLE procedures to enable and disable the Flashback Data Archives. • Use Flashback Query, Flashback Version Query, or Flashback Transaction Query for SQL code that you write, for convenience. • To obtain an SCN to use later with a flashback feature, you can use the DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER function. • To compute or retrieve a past time to use in a query, use a function return value as a timestamp or SCN argument. For example, add or subtract an INTERVAL value to the value of the SYSTIMESTAMP function. • To ensure database consistency, always perform a COMMIT or ROLLBACK operation before querying past data. • Remember that all flashback processing uses the current session settings, such as national language and character set, not the settings that were in effect at the time being queried. • To query past data at a precise time, use an SCN. If you use a timestamp, the actual time queried might be up to 3 seconds earlier than the time you specify. Oracle Database uses SCNs internally and maps them to timestamps at a granularity of 3 seconds. • You cannot retrieve past data from a dynamic performance (V$) view. A query on such a view always returns current data. However, you can perform queries on past data in static data dictionary views, such as *_TABLES.
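Two of the guidelines above (obtaining an SCN, and using interval arithmetic to compute a past time) can be sketched as follows; HR.REGIONS is an illustrative table, not part of the guideline text.

```sql
-- Capture the current SCN for later use with a flashback feature
SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER FROM dual;

-- Query past data by subtracting an INTERVAL from SYSTIMESTAMP
-- (remember the timestamp-to-SCN mapping granularity of 3 seconds)
SELECT *
FROM   hr.regions
AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '15' MINUTE);
```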
Oracle Database 11g: New Features for Administrators 11 - 15
Flashback Transaction
• Setting up Flashback Transaction prerequisites
• Stepping through a possible workflow
• Using the Flashback Transaction Wizard
• Querying transactions with and without dependencies
• Choosing back-out options and flashing back transactions
• Reviewing the results
Prerequisites To use this functionality, supplemental logging must be enabled and the correct privileges established. For example, the HR user in the HR schema decides to use Flashback Transaction for the REGIONS table. A user with the SYSDBA privilege performs the following setup steps in SQL*Plus:

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
GRANT EXECUTE ON dbms_flashback TO hr;
GRANT SELECT ANY TRANSACTION TO hr;
Oracle Database 11g: New Features for Administrators 11 - 17
Flashing Back a Transaction
• You can flash back a transaction with Enterprise Manager or the command line. • EM uses the Flashback Transaction Wizard, which calls the DBMS_FLASHBACK.TRANSACTION_BACKOUT procedure with the NOCASCADE option. • If the PL/SQL call finishes successfully, it means that the transaction does not have any dependencies, and the single transaction is backed out successfully.
Flashing Back a Transaction Security Privileges To flash back or back out a transaction (that is, to create a compensating transaction), you must have the SELECT, FLASHBACK, and DML privileges on all affected tables. Conditions of Use • Transaction back-out is not supported across conflicting DDL. • Transaction back-out inherits data type support from LogMiner. See the Oracle Database 11g documentation for supported data types. Recommendations • When you discover the need for a transaction back-out, the sooner you start the back-out operation, the better the performance. Large redo logs and high transaction rates result in slower back-out operations. • Provide a transaction name for the back-out operation to facilitate later auditing. If you do not provide a transaction name, it is generated automatically.
Oracle Database 11g: New Features for Administrators 11 - 18
Possible Workflow
• Viewing data in a table
• Discovering a logical problem
• Using Flashback Transaction:
– Performing a query
– Selecting a transaction
– Flashing back a transaction (with no conflicts)
– Choosing other back-out options (if conflicts exist)
Possible Workflow Assume that several transactions occurred as indicated below:

connect hr/hr
INSERT INTO hr.regions VALUES (5,'Pole');
COMMIT;
UPDATE hr.regions SET region_name='Poles' WHERE region_id = 5;
UPDATE hr.regions SET region_name='North and South Poles' WHERE region_id = 5;
COMMIT;
INSERT INTO hr.countries VALUES ('TT','Test Country',5);
COMMIT;
connect sys/<password> as sysdba
ALTER SYSTEM ARCHIVE LOG CURRENT;
Oracle Database 11g: New Features for Administrators 11 - 19
Viewing Data To view the data in a table in Enterprise Manager, select Schema > Tables. While viewing the content of the HR.REGIONS table, you discover a logical problem. Region 5 is misnamed. You decide to immediately address this issue.
Oracle Database 11g: New Features for Administrators 11 - 20
Flashback Transaction Wizard In Enterprise Manager, select Schema > Tables > HR.REGIONS, then select "Flashback Transaction" in the Actions drop-down list, and click Go. This invokes the Flashback Transaction Wizard for your selected table. The Flashback Transaction: Perform Query page is displayed. Select the appropriate time range and add query parameters. (The more specific you can be, the shorter the search of the Flashback Transaction Wizard.) In Enterprise Manager, Flashback Transaction and LogMiner are seamlessly integrated (as this page demonstrates). Without Enterprise Manager, use the DBMS_FLASHBACK.TRANSACTION_BACKOUT procedure, which is described in the PL/SQL Packages and Types Reference. Essentially, you take an array of transaction IDs as the starting point of your dependency search. For example:

CREATE TYPE XID_ARRAY AS VARRAY(100) OF RAW(8);
CREATE OR REPLACE PROCEDURE TRANSACTION_BACKOUT(
  numberOfXIDs NUMBER,               -- number of transactions passed as input
  xids         XID_ARRAY,            -- the list of transaction IDs
  options      NUMBER default NOCASCADE, -- back out dependent transactions
  timeHint     TIMESTAMP default MINTIME -- time hint on the transaction start
);
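A command-line back-out might look like the following anonymous block. This is a sketch: the transaction ID is a placeholder (a real one would come from a LogMiner or FLASHBACK_TRANSACTION_QUERY search), the arguments are passed positionally to avoid relying on exact formal parameter names, and SYS.XID_ARRAY is assumed to be the predefined array type shown on the slide.

```sql
DECLARE
  -- Placeholder XID; obtain real XIDs by mining the redo stream
  xids SYS.XID_ARRAY := SYS.XID_ARRAY('0500110041040000');
BEGIN
  -- Back out one transaction; raises an error if dependent
  -- transactions exist (NOCASCADE is the default option)
  DBMS_FLASHBACK.TRANSACTION_BACKOUT(1, xids, DBMS_FLASHBACK.NOCASCADE);
  -- The compensating transaction is left uncommitted:
  -- review the dependency report, then COMMIT or ROLLBACK
END;
/
```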
Oracle Database 11g: New Features for Administrators 11 - 21
Flashback Transaction Wizard (Continued) The Flashback Transaction: Select Transaction page displays the transactions according to your previously entered specifications. First, display the transaction details to confirm that you are flashing back the correct transaction. Then select the offending transaction and continue with the wizard.
Oracle Database 11g: New Features for Administrators 11 - 22
Flashback Transaction Wizard (Continued) The Flashback Transaction Wizard now generates the undo script and flashes back the transaction, but it gives you control over committing this flashback. Before you commit the transaction, you can use the Execute SQL area at the bottom of the Flashback Transaction: Result page to view what the result of your COMMIT will be.
Oracle Database 11g: New Features for Administrators 11 - 23
Finishing Up On the Flashback Transaction: Review page, click the "Show Undo SQL Script" button to view the compensating SQL statements. Click Finish to commit your compensating transaction.
Oracle Database 11g: New Features for Administrators 11 - 24
Choosing Other Back-out Options The TRANSACTION_BACKOUT procedure checks dependencies, such as: • Write-after-write (WAW) • Primary and unique constraints A transaction has a WAW dependency when it updates or deletes a row that has been inserted or updated by an earlier transaction. This can occur, for example, in a master/detail relationship of primary (or unique) and mandatory foreign key constraints. To understand the difference between the NONCONFLICT_ONLY and the NOCASCADE_FORCE options, assume that the T1 transaction changes rows R1, R2, and R3, and the T2 transaction changes rows R1, R4, and R5. In this scenario, both transactions update row R1, so it is a "conflicting" row. The T2 transaction has a WAW dependency on the T1 transaction. With the NONCONFLICT_ONLY option, R2 and R3 are backed out, because there is no conflict, and it is assumed that you know best what to do with the R1 row. With the NOCASCADE_FORCE option, all three rows (R1, R2, and R3) are backed out. Note: This screenshot is not part of the workflow example, but shows additional details of a more complex situation.
Oracle Database 11g: New Features for Administrators 11 - 25
Choosing Other Back-out Options (Continued) The Flashback Transaction Wizard works as follows: • If the DBMS_FLASHBACK.TRANSACTION_BACKOUT procedure with the NOCASCADE option fails (because there are dependent transactions), you can change the recovery options. • With the NONCONFLICT_ONLY option, nonconflicting rows within a transaction are backed out, which implies that database consistency is maintained (although transaction atomicity is broken for the sake of data repair). • If you want to forcibly back out the given transactions without paying attention to the dependent transactions, use the NOCASCADE_FORCE option. The server simply executes the compensating DML commands for the given transactions in reverse order of their commit times. If no constraints break, you can proceed to commit the changes; otherwise, roll back. • To initiate the complete removal of the given transactions and all their dependents in post-order fashion, use the CASCADE option. Note: This screenshot is not part of the workflow example, but shows additional details of a more complex situation.
Oracle Database 11g: New Features for Administrators 11 - 26
Final Steps Without EM
After choosing your back-out option, the dependency report is generated in the DBA_FLASHBACK_TXN_STATE and DBA_FLASHBACK_TXN_REPORT views. • Review the dependency report, which shows all transactions that were backed out • Commit the changes to make them permanent • Or roll back to discard the changes
Final Steps Without EM The DBA_FLASHBACK_TXN_STATE view contains the current state of a transaction: whether it is alive in the system or effectively backed out. This view is atomically maintained with the compensating transaction. For each compensating transaction, there can be multiple rows, where each row provides the dependency relation between the transactions that have been compensated by the compensating transaction. The DBA_FLASHBACK_TXN_REPORT view provides detailed information about all compensating transactions that have been committed in the database. Each row in this view is associated with one compensating transaction. For a detailed description of these views, see the Oracle Database Reference.
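A sketch of how these two views might be queried after a back-out follows; the column names used here are assumptions based on the view descriptions above, so verify them against the Oracle Database Reference before relying on them.

```sql
-- Which transactions were backed out, and by which
-- compensating transaction? (column names assumed)
SELECT compensating_xid, xid, dependent_xid
FROM   dba_flashback_txn_state;

-- One row per committed compensating transaction
SELECT compensating_xid, compensating_txn_name, commit_time
FROM   dba_flashback_txn_report;
```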
Oracle Database 11g: New Features for Administrators 11 - 27
LogMiner
• Powerful audit tool for Oracle databases • Direct access to redo logs • User interfaces: – SQL command-line – Graphical User Interface (GUI)
LogMiner What You Already Know: LogMiner is a powerful audit tool for Oracle databases, allowing you to easily locate changes in the database, enabling sophisticated data analyses, and providing undo capabilities to roll back logical data corruptions or user errors. LogMiner directly accesses the Oracle redo logs, which are complete records of all activities performed on the database, and the associated data dictionary. The tool offers two interfaces: a SQL command line and a GUI. What Is New: Enterprise Manager Database Control now has an interface for LogMiner. In prior releases, administrators were required to install and use the standalone Java Console for LogMiner. With this new interface, administrators have a task-based, intuitive approach to using LogMiner. This improves the manageability of LogMiner. In Enterprise Manager, select Availability > View and Manage Transactions. LogMiner supports the following activities: • Specifying query parameters • Stopping the query and showing partial results, if the query takes a long time • Partial querying, then showing the estimated complete query time • Saving the query result • Re-mining or refining the query based on initial results • Showing transaction details, dependencies, and the compensating "undo" SQL script • Flashing back and committing the transaction
Oracle Database 11g: New Features for Administrators 11 - 28
Querying Transactions If you need, for example, to report on the life cycle of a specific column or research transaction details, you may not know the specific transaction ID. So your first step is to query the redo stream (done internally either in transaction tables or with LogMiner). In Enterprise Manager, select Availability > Browse Transactions. Specify a time frame and either the username or the table in question to start a query. The Start Time field defaults to the start time of the online log file. You have these basic options: • If you know at least one table involved in the transaction, then you must provide either a time range or an SCN range as additional filter criteria. • If you know the username, but not the table, then you may want to know what else that user did in this time frame.
Oracle Database 11g: New Features for Administrators 11 - 29
Refining the Query You can refine the query with advanced query options. Click Advanced Query on the LogMiner page to specify additional column values and/or additional LogMiner WHERE clauses, such as: WHERE session_info= This matches all transactions initiated from the host. Click the Info icon to view all LogMiner options. You can select different combinations to form a WHERE clause. Once the WHERE clause is formed, you can edit it further by typing directly in the WHERE clause text box. For example, if you want to find transactions that modified a certain column, you choose REDO_VALUE, the column name, and "is present". If you then want to refine the query further to show all transactions where the changed value is more than twice the original value, you can specify a WHERE clause like this one: WHERE DBMS_LOGMNR.MINE_VALUE(REDO_VALUE, 'HR.EMPLOYEES.SALARY') > 2*DBMS_LOGMNR.MINE_VALUE(UNDO_VALUE, 'HR.EMPLOYEES.SALARY');
Oracle Database 11g: New Features for Administrators 11 - 31
Reviewing Transactions Once you click Continue on the first LogMiner page, you see the Processing: Mining Transactions page. It displays, among other things, how many transactions were found and the approximate time to complete the operation. You can stop the query at any time and review the results found so far.
Oracle Database 11g: New Features for Administrators 11 - 32
Reviewing Transactions You can review transaction details. Flashback Transaction is covered earlier in this lesson. Click OK to return to the "LogMiner Results" page.
Oracle Database 11g: New Features for Administrators 11 - 34
Summary
In this lesson, you should have learned how to: • Describe transactions and undo • Describe undo backup optimization • Prepare your database for flashback • Create, change, and drop a flashback data archive • View flashback data archive metadata • Set up Flashback Transaction prerequisites • Query transactions with and without dependencies • Choose back-out options and flash back transactions • Use EM LogMiner • Review transaction details
11g Infrastructure Grid: Server Manageability 12 - 1
Objectives
After completing this lesson, you should be able to: • Set up the Automatic Diagnostic Repository • Use the Support Workbench • Run health checks • Use the SQL Repair Advisor
Self-managing Database: Oracle Database 10g Self-management is an ongoing goal for the Oracle database. Oracle Database 10g marked the beginning of a huge effort to make the database easier to use. With Oracle Database 10g, the focus of self-management was primarily on performance and resources.
11g Infrastructure Grid: Server Manageability 12 - 3
Self-managing Database: The Next Generation
Manage Performance and Resources Manage Change Manage Fault
Self-managing Database: The Next Generation Oracle Database 11g adds two more important axes to the overall self-management goal: change management and fault management. In this lesson, we concentrate on the fault management capabilities introduced in Oracle Database 11g.
11g Infrastructure Grid: Server Manageability 12 - 4
Oracle Database 11g R1 Fault Management
Goal: Reduce Time to Resolution
Change Assurance and Automatic Health Checks
Oracle Database 11g R1 Fault Management The goals of the fault diagnosability infrastructure are the following: • Detecting problems proactively • Limiting damage and interruptions after a problem is detected • Reducing problem diagnostic time • Reducing problem resolution time • Simplifying customer interaction with Oracle Support
11g Infrastructure Grid: Server Manageability 12 - 5
Ease Diagnosis: Automatic Diagnostic Workflow An always-on, in-memory tracing facility enables database components to capture diagnostic data upon first failure for critical errors. A special repository, called the Automatic Diagnostic Repository, is automatically maintained to hold diagnostic information about critical error events. This information can be used to create incident packages to be sent to Oracle Support Services for investigation. Here is a possible workflow for a diagnostic session: 1. An incident causes an alert to be raised in Enterprise Manager. 2. The DBA can view the alert on the EM Alert page. 3. The DBA can drill down to incident and problem details. 4. The DBA or Oracle Support Services can decide whether that information should be packaged and sent to Oracle Support Services via MetaLink. The DBA can add files to the data to be packaged automatically.
11g Infrastructure Grid: Server Manageability 12 - 6
(Slide graphic: the DIAGNOSTIC_DEST initialization parameter defines the Automatic Diagnostic Repository location, replacing parameters such as BACKGROUND_DUMP_DEST and CORE_DUMP_DEST; the repository is browsed with the Support Workbench.)
Automatic Diagnostic Repository (ADR) The ADR is a file-based repository for database diagnostic data such as traces, incident dumps and packages, the alert log, health monitor reports, core dumps, and more. It has a unified directory structure across multiple instances and multiple products stored outside of any database. It is therefore available for problem diagnosis when the database is down. Beginning with Oracle Database 11g R1, the database, Automatic Storage Management (ASM), Cluster Ready Services (CRS), and other Oracle products or components store all diagnostic data in the ADR. Each instance of each product stores diagnostic data underneath its own ADR home directory. For example, in a Real Application Clusters environment with shared storage and ASM, each database instance and each ASM instance has a home directory within the ADR. ADR's unified directory structure, consistent diagnostic data formats (UTS) across products and instances, and a unified set of tools enable customers and Oracle Support to correlate and analyze diagnostic data across multiple instances. Starting with Oracle Database 11g R1, the traditional …_DUMP_DEST initialization parameters are ignored. The ADR root directory is known as the ADR base. Its location is set by the DIAGNOSTIC_DEST initialization parameter. If this parameter is omitted or left null, the database sets DIAGNOSTIC_DEST upon startup as follows: If environment variable ORACLE_BASE is set, DIAGNOSTIC_DEST is set to $ORACLE_BASE. If environment variable ORACLE_BASE is not set, DIAGNOSTIC_DEST is set to $ORACLE_HOME/log.
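You can confirm where the instance resolved DIAGNOSTIC_DEST with a simple query (a minimal sketch):

```sql
-- Show the ADR base chosen by the instance
SELECT value
FROM   v$parameter
WHERE  name = 'diagnostic_dest';
```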
11g Infrastructure Grid: Server Manageability 12 - 7
Automatic Diagnostic Repository (ADR) Within the ADR base, there can be multiple ADR homes, where each ADR home is the root directory for all diagnostic data for a particular instance of a particular Oracle product or component. The location of an ADR home for a database is shown on the above graphic. Also, two alert files are now generated. One is textual, exactly like the alert file used with previous releases of the Oracle database, and is located under the TRACE directory of each ADR home. In addition, an alert message file conforming to the XML standard is stored in the ALERT subdirectory inside the ADR home. You can view the alert log in text format (with the XML tags stripped) with Enterprise Manager and with the ADRCI utility. The graphic on the slide shows you the directory structure of an ADR home. The INCIDENT directory contains multiple subdirectories, where each subdirectory is named for a particular incident, and where each contains dumps pertaining only to that incident. The HM directory contains the checker run reports generated by the Health Monitor. There is also a METADATA directory that contains important files for the repository itself. You can compare this to a database dictionary. This dictionary can be queried using ADRCI. The ADR Command Interpreter (ADRCI) is a utility that enables you to perform all of the tasks permitted by the Support Workbench, but in a command-line environment. ADRCI also enables you to view the names of the trace files in the ADR, and to view the alert log with XML tags stripped, with and without content filtering. In addition, you can use V$DIAG_INFO to list some important ADR locations.
11g Infrastructure Grid: Server Manageability 12 - 8
ADRCI: The ADR Command-Line Tool
• Allows interaction with ADR from the OS prompt
• Can invoke IPS from the command line instead of EM
• DBAs should use the EM Support Workbench though:
– Leverages the same toolkit/libraries that ADRCI is built upon
– Easy-to-follow GUI

ADRCI> show incident
ADR Home = /u01/app/oracle/product/11.1.0/db_1/log/diag/rdbms/orcl/orcl:
*****************************************************************************
INCIDENT_ID PROBLEM_KEY                            CREATE_TIME
------------ -------------------------------------- ---------------------------------
1681         ORA-600_dbgris01:1,_addr=0xa9876541    17-JAN-07 09.17.44.843125000…
1682         ORA-600_dbgris01:12,_addr=0xa9876542   18-JAN-07 09.18.59.434775000…
2 incident info records fetched
ADRCI>
ADRCI: The ADR Command-Line Tool ADRCI is a command-line tool that is part of the fault diagnosability infrastructure introduced in Oracle Database 11g. ADRCI enables you to: • View diagnostic data within the Automatic Diagnostic Repository (ADR). • Package incident and problem information into a zip file for transmission to Oracle Support. ADRCI has a rich command set, and can be used in interactive mode or within scripts. In addition, ADRCI can execute scripts of ADRCI commands in the same way that SQL*Plus executes scripts of SQL and PL/SQL commands. There is no need to log in to ADRCI, because the data in the ADR is not intended to be secure. ADR data is secured only by operating system permissions on the ADR directories. The easiest way to package and otherwise manage diagnostic data is with the Support Workbench of Oracle Enterprise Manager. ADRCI provides a command-line alternative to most of the functionality of the Support Workbench, and adds capabilities such as listing and querying trace files. The above example shows an ADRCI session that lists all open incidents stored in the ADR. Note: For more information about ADRCI, refer to the Oracle Database Utilities guide.
11g Infrastructure Grid: Server Manageability 12 - 9
V$DIAG_INFO
SQL> SELECT * FROM V$DIAG_INFO;
NAME ------------------Diag Enabled ADR Base ADR Home Diag Trace Diag Alert Diag Incident Diag Cdump Health Monitor Default Trace File Active Problem Count Active Incident Count
V$DIAG_INFO The V$DIAG_INFO view lists all important ADR locations: • ADR Base: Path of ADR base • ADR Home: Path of ADR home for the current database instance • Diag Trace: Location of the text alert log and background/foreground process trace files • Diag Alert: Location of an XML version of the alert log • … • Default Trace File: Path to the trace file for your session. SQL Trace files are written here.
11g Infrastructure Grid: Server Manageability 12 - 10
Location for Diagnostic Traces The above table describes the different classes of trace data and dumps that reside in both Oracle Database 10g and Oracle Database 11g. With Oracle Database 11g, there is no distinction between foreground and background trace files. Both types of files go into the $ADR_HOME/trace directory. All nonincident traces are stored inside the TRACE subdirectory. This is the main difference compared with previous releases, where critical error information was dumped into the corresponding process trace files instead of incident dumps. Starting with Oracle Database 11g, incident dumps are placed in files separate from the normal process trace files. Note: The main difference between a trace and a dump is that a trace is more of a continuous output, such as when SQL tracing is turned on, whereas a dump is a one-time output in response to an event such as an incident. Also, a core is a binary memory dump that is port-specific.
Viewing the Alert Log Using Enterprise Manager You can view the alert log with a text editor, with Enterprise Manager, or with the ADRCI utility. To view the alert log with Enterprise Manager: 1. Access the Database Home page in Enterprise Manager. 2. Under Related Links, click Alert Log Contents. The View Alert Log Contents page appears. 3. Select the number of entries to view, and then click Go.
Viewing the Alert Log Using ADRCI

adrci>> show alert -tail

ADR Home = /u01/app/oracle/diag/rdbms/orcl/orcl:
*************************************************************************
2007-04-16 22:10:50.756000 -07:00
ORA-1654: unable to extend index SYS.I_H_OBJ#_COL# by 128 in tablespace SYSTEM
2007-04-16 22:21:20.920000 -07:00
Thread 1 advanced to log sequence 400
  Current log# 3 seq# 400 mem# 0: +DATA/orcl/onlinelog/group_3.266.618805031
  Current log# 3 seq# 400 mem# 1: +DATA/orcl/onlinelog/group_3.267.618805047
…
Thread 1 advanced to log sequence 401
  Current log# 1 seq# 401 mem# 0: +DATA/orcl/onlinelog/group_1.262.618804977
  Current log# 1 seq# 401 mem# 1: +DATA/orcl/onlinelog/group_1.263.618804993
DIA-48223: Interrupt Requested - Fetch Aborted - Return Code [1]
adrci>>

adrci>> SHOW ALERT -P "MESSAGE_TEXT LIKE '%ORA-600%'"

ADR Home = /u01/app/oracle/diag/rdbms/orcl/orcl:
*************************************************************************
adrci>>
Viewing the Alert Log Using ADRCI
You can also use ADRCI to view the content of your alert log file. Optionally, you can change the current ADR home: use the SHOW HOMES command to list all ADR homes, and the SET HOMEPATH command to change the current ADR home. Ensure that operating system environment variables such as ORACLE_HOME are set properly, and then enter the following command at the operating system command prompt: adrci. The utility starts and displays its prompt as shown on the slide. Then use the SHOW ALERT command. To limit the output, you can look at the last records using the -TAIL option. This displays the last portion of the alert log (about 20 to 30 messages) and then waits for more messages to arrive in the alert log. As each message arrives, it is appended to the display. This command enables you to perform live monitoring of the alert log. Press CTRL-C to stop waiting and return to the ADRCI prompt. You can also specify the number of lines to be printed if you want. In addition, you can filter the output of SHOW ALERT, as shown in the bottom example on the slide, where only alert log messages that contain the string 'ORA-600' are displayed.
Note: ADRCI allows you to spool the output to a file, exactly as in SQL*Plus.
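The options described above can be sketched as the following ADRCI session (the homepath, line count, and spool file name are illustrative only; adjust them to your environment):

```
adrci>> show homes
adrci>> set homepath diag/rdbms/orcl/orcl
adrci>> show alert -tail 50
adrci>> spool /tmp/alert_ora600.txt
adrci>> show alert -p "MESSAGE_TEXT LIKE '%ORA-600%'"
adrci>> spool off
```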
Problems and Incidents
(Slide diagram: a critical error raises a problem, identified by a problem ID and a problem key. Each occurrence of the problem raises an incident, identified by an incident ID; incidents are created automatically or manually and are subject to automatic flood control. An incident moves through the statuses Collecting, Ready, Tracking, Data-Purged, and Closed, with automatic transitions between them. Traces are stored in the ADR, from which MMON auto-purges expired data and the DBA builds a package to be sent to Oracle Support. Non-critical errors can be captured as manually created incidents.)
Problems and Incidents
To facilitate diagnosis and resolution of critical errors, the fault diagnosability infrastructure introduces two concepts for Oracle Database: problems and incidents.
• A problem is a critical error in the database. Problems are tracked in the ADR. Each problem is identified by a unique problem ID and has a problem key, which is a set of attributes that describe the problem. The problem key includes the ORA error number, error parameter values, and other information. Here is a possible list of critical errors:
- All internal errors (ORA-60x errors)
- All system access violations (SEGV, SIGBUS)
- ORA-4020 (Deadlock on library object), ORA-8103 (Object no longer exists), ORA-1410 (Invalid ROWID), ORA-1578 (Data block corrupted), ORA-29740 (Node eviction), ORA-255 (Database is not mounted), ORA-376 (File cannot be read at this time), ORA-4030 (Out of process memory), ORA-4031 (Unable to allocate more bytes of shared memory), ORA-355 (The change numbers are out of order), ORA-356 (Inconsistent lengths in change description), ORA-353 (Log corruption), ORA-7445 (Operating system exception)
• An incident is a single occurrence of a problem. When a problem occurs multiple times, as is often the case, an incident is created for each occurrence. Incidents are tracked in the ADR. Each incident is identified by a numeric incident ID, which is unique within an ADR home.
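Both concepts can be inspected directly from ADRCI; a brief sketch (the incident ID shown is hypothetical):

```
adrci>> show problem
adrci>> show incident
adrci>> show incident -mode detail -p "incident_id=17060"
```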
Problems and Incidents (Continued)
When an incident occurs, the database makes an entry in the alert log, gathers diagnostic data about the incident (a stack trace, the process state dump, and other dumps of important data structures), tags the diagnostic data with the incident ID, and stores the data in an ADR subdirectory created for that incident. Each incident has a problem key and is mapped to a single problem. Two incidents are considered to have the same root cause if their problem keys match.
Large amounts of diagnostic information can be created very quickly if a large number of sessions stumble across the same critical error, but the diagnostic information for more than a small number of those incidents is not needed. That is why the ADR provides flood control, so that only a certain number of incidents under the same problem can be dumped in a given time interval. Note that flood-controlled incidents still generate incidents; they only skip the dump actions. By default, only five dumps per hour for a given problem are allowed.
You can view a problem as a set of incidents that are perceived to have the same symptoms. The main reason to introduce this concept is to make it easier for users to manage errors on their systems. For example, a symptom that occurs 20 times should be reported to Oracle only once. Mostly, you will manage problems instead of incidents, using IPS to package a problem to be sent to Oracle Support.
Most commonly, incidents are created automatically when a critical error occurs. However, you can also create an incident manually, via the GUI provided by the EM Support Workbench. Manual incident creation is mostly done when you want to report problems that are not accompanied by critical errors raised inside the Oracle code.
As time goes by, more and more incidents accumulate in the ADR. A retention policy allows you to specify how long to keep the diagnostic data.
ADR incidents are controlled by two different policies:
• The incident metadata retention policy controls how long the metadata is kept around. This policy has a default setting of one year.
• The incident files and dumps retention policy controls how long generated dump files are kept around. This policy has a default setting of one month.
You can change these settings using the Incident Package Configuration link on the EM Support Workbench page. Inside the RDBMS component, MMON is responsible for automatically purging expired ADR data.
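From ADRCI, the same two retention policies can be viewed and changed with the SHOW CONTROL and SET CONTROL commands; both policies are expressed in hours. A sketch that simply restates the defaults of one month and one year:

```
adrci>> show control
adrci>> set control (SHORTP_POLICY = 720)
adrci>> set control (LONGP_POLICY = 8760)
```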
Problems and Incidents (Continued)
The status of an incident reflects its state. An incident can be in any one of the following states:
• Collecting: The incident has been newly created and is in the process of collecting diagnostic information. In this state, the incident data can be incomplete; it should not be packaged and should be viewed with discretion.
• Ready: The data collection phase has completed. The incident is now ready to be used for analysis, or to be packaged and sent to Oracle Support.
• Tracking: The DBA is working on the incident and prefers that it be kept in the repository indefinitely. You have to change the incident status to this value manually.
• Closed: The incident is resolved. In this state, the ADR can elect to purge the incident after it passes its retention policy.
• Data-Purged: The associated files have been removed from the incident. In some cases, even if the incident files are still physically present, it is not advisable to look at them because they can be in an inconsistent state. Note that the incident metadata itself is still valid for viewing.
If an incident has been in either the Collecting or the Ready state for over twice its retention length, the incident automatically moves to the Closed state. You can also purge incident files manually.
For simplicity, problem metadata is internally maintained by the ADR. Problems are created automatically when the first incident with the problem key occurs, and the problem metadata is removed after its last incident is removed from the repository.
Note: It is not possible to disable automatic incident creation for critical errors.
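Manual purging can also be done from ADRCI; for example (the incident ID is hypothetical, and the -age value is in minutes):

```
adrci>> purge -i 17060
adrci>> purge -age 43200 -type INCIDENT
```

The first command purges the diagnostic files of one specific incident; the second purges incident data older than 30 days (43,200 minutes).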
Incident Packaging Service (IPS)
• Uses rules to correlate all relevant dumps and traces from ADR for a given problem and allow you to package them to ship to Oracle Support • Rules can involve files that were generated around the same time, associated with the same client, same error codes, etc. • DBAs can explicitly add/edit or remove files before packaging • Access IPS through either EM or ADRCI
Incident Packaging Service With incident packaging service (IPS) you can automatically and easily gather all diagnostic data (traces, dumps, health check reports, SQL test cases, and more) pertaining to a critical error and package the data into a zip file suitable for transmission to Oracle Support. Because all diagnostic data relating to a critical error are tagged with that error's incident number, you do not have to search through trace files, dump files, and so on to determine the files that are required for analysis; the incident packaging service identifies all required files automatically and adds them to the package.
• An incident package is a logical structure inside the ADR representing one or more problems
• A package is a zip file containing dump information related to an incident package
• By default, only the first three and last three incidents of each problem are included in an incident package
• You can generate complete or incremental zip files
(Slide graphic: the ADR home directory tree, showing the alert, cdump, and incpkg/pkg_1 subdirectories.)
Incident Packages To upload diagnostic data to Oracle Support Services, you first collect the data into an incident package. When you create an incident package, you select one or more problems to add to the incident package. The Support Workbench then automatically adds to the incident package the incident information, trace files, and dump files associated with the selected problems. Because a problem can have many incidents (many occurrences of the same problem), by default only the first three and last three incidents for each problem are added to the incident package. You can change this default number on the Incident Packaging Configuration page accessible from the Support Workbench page. After the incident package is created, you can add any type of external file to the incident package, remove selected files from the incident package, or edit selected files in the incident package to remove sensitive data. An incident package is a logical construct only, until you create a physical file from the incident package contents. That is, an incident package starts out as a collection of metadata in the ADR. As you add and remove incident package contents, only the metadata is modified. When you are ready to upload the data to Oracle Support Services, you either invoke a Support Workbench or ADRCI function that gathers all the files referenced by the metadata, places them into a zip file, and then uploads the zip to MetaLink.
EM Support Workbench Overview
• Wizard that guides you through the process of handling problems
• You can perform the following tasks with the Support Workbench:
- View details on problems and incidents
- Run health checks
- Generate additional diagnostic data
- Run advisors to help resolve problems
- Create and track service requests through MetaLink
- Generate incident packages
- Close problems once resolved
EM Support Workbench Overview The Support Workbench is an Enterprise Manager wizard that helps you through the process of handling critical errors. It displays incident notifications, presents incident details, and enables you to select incidents for further processing. Further processing includes running additional health checks, invoking the incident packaging service (IPS) to package all diagnostic data about the incidents, adding SQL test cases and selected user files to the package, filing a technical assistance request (TAR) with Oracle Support, shipping the packaged incident information to Oracle Support, and tracking the TAR through its lifecycle. You can perform the following tasks with the Support Workbench: • View details on problems and incidents. • Manually run health checks to gather additional diagnostic data for a problem. • Generate additional dumps and SQL test cases to add to the diagnostic data for a problem. • Run advisors to help resolve problems. • Create and track a service request through MetaLink, and add the service request number to the problem data. • Collect all diagnostic data relating to one or more problems into an incident package and then upload the incident package to Oracle Support Services. • Close the problem when the problem is resolved.
Oracle Configuration Manager
Enterprise Manager Support Workbench uses Oracle Configuration Manager to upload the physical files generated by IPS to MetaLink. If Oracle Configuration Manager is not installed or properly configured, the upload may fail. In this case, a message is displayed with the path to the incident package zip file and a request that you upload the file to Oracle Support manually. You can upload it manually through MetaLink.
During Oracle Database 11g installation, the Oracle Universal Installer displays the Oracle Configuration Manager Registration screen shown above. On that screen, you need to select the Enable check box and accept the license agreement before you can enter your Customer Identification Number (CSI), your MetaLink account username, and your country code. If you do not configure Oracle Configuration Manager, you can still upload incident packages to MetaLink manually.
Note: For more information about Oracle Configuration Manager, see the Oracle Configuration Manager Installation and Administration Guide, available at the following URL:
http://www.oracle.com/technology/documentation/oem.html
EM Support Workbench Roadmap
1. View critical error alerts in Enterprise Manager
2. View problem details
3. Gather additional diagnostic information
4. Create a service request
5. Package and upload diagnostic data to Oracle Support
6. Track the SR and implement repairs
7. Close incidents
EM Support Workbench Roadmap The above graphic is a summary of the tasks that you complete to investigate, report, and in some cases, resolve a problem using Enterprise Manager Support Workbench: 1. Start by accessing the Database Home page in Enterprise Manager, and reviewing critical error alerts. Select an alert for which to view details. 2. Examine the problem details and view a list of all incidents that were recorded for the problem. Display findings from any health checks that were automatically run. 3. Optionally, run additional health checks and invoke the SQL Test Case Builder, which gathers all required data related to a SQL problem and packages the information in a way that enables the problem to be reproduced at Oracle Support. 4. Create a service request with MetaLink and optionally record the service request number with the problem information. 5. Invoke a wizard that automatically packages all gathered diagnostic data for a problem and uploads the data to Oracle Support. Optionally edit the data to remove sensitive information before uploading. 6. Optionally maintain an activity log for the service request in the Support Workbench. Run Oracle advisors to help repair SQL failures or corrupted data. 7. Set status for one, some, or all incidents for the problem to Closed.
View Critical Error Alerts in Enterprise Manager
You begin the process of investigating problems (critical errors) by reviewing critical error alerts on the Database Home page. To view critical error alerts, access the Database Home page in Enterprise Manager. From the Home page, you can look at the Diagnostic Summary section, from where you can click the Active Incidents link if there are incidents. You can also use the Alerts section and look for critical alerts flagged as Incidents. When you click the Active Incidents link, you end up on the Support Workbench page, from where you can retrieve details about all problems and corresponding incidents. From there, you can also retrieve all Health Monitor checker runs and created packages.
Note: The tasks described in this section are all Enterprise Manager–based. You can also accomplish all of these tasks with the ADRCI command-line utility and PL/SQL package procedures. See Oracle Database Utilities for more information on the ADRCI utility.
View Problem Details From the Problems sub-page on the Support Workbench page, click the ID of the problem you want to investigate. This takes you to the corresponding Problem Details page. On this page, you can see all incidents that are related to your problem. You can associate your problem with a MetaLink service request and bug number. In the Investigate and Resolve section of the page, you have a Self Service sub-page that has direct links to the operation you can do on this problem. In the same section, the Oracle Support sub-page has direct links to MetaLink. The Activity Log sub-page shows you the system-generated operations that have occurred on your problem so far. This sub-page allows you to add your own comments while investigating your problem. From the Incidents sub-page, you can click on a related incident ID to get to the corresponding Incident Details page.
View Incident Details
On the Incident Details page, the Dump Files sub-page appears first and lists all corresponding dump files. You can then click the eyeglasses icon for a particular dump file to view the file content with its various sections.
View Incident Details
On the Incident Details page, click Checker Findings to view the Checker Findings sub-page. This page displays findings from any health checks that were automatically run when the critical error was detected. Most of the time, you can select one or more findings and invoke an advisor to fix the issue.
Create a Service Request
Before you can package and upload diagnostic information for the problem to Oracle Support, you must create a service request. To create a service request, you need to go to MetaLink first. MetaLink can be accessed directly from the Problem Details page when you click the Go to MetaLink button in the Investigate and Resolve section of the page. Once on MetaLink, log in and create a service request in the usual manner. Once done, you can record that service request number with your problem. This is entirely optional and is for your reference only. In the Summary section, click the Edit button that is adjacent to the SR# label; in the window that opens, enter the SR#, and then click OK.
Package and upload diagnostic data to Oracle Support
Package and upload diagnostic data to Oracle Support Support Workbench provides two methods for creating and uploading an incident package: the Quick Packaging method and the Advanced Packaging method. The example on the slide shows you how to use Quick Packaging. Quick Packaging is a more automated method with a minimum of steps. You select a single problem, provide an incident package name and description, and then schedule the incident package upload, either immediately or at a specified date and time. Support Workbench automatically places diagnostic data related to the problem into the incident package, finalizes the incident package, creates the zip file, and then uploads the file. With this method, you do not have the opportunity to add, edit, or remove incident package files or add other diagnostic data such as SQL test cases. To package and upload diagnostic data to Oracle Support: 1. On the Problem Details page, in the Investigate and Resolve section, click Quick Package. The Create New Package page of the Quick Packaging wizard appears. 2. Enter a package name and description. 3. If you did not record the service request number in the previous task, enter it here. 4. Click Next, and then proceed with the remaining pages of the Quick Packaging wizard. Click Submit on the Review page to upload the package.
Track the SR and Implement Repairs
After uploading diagnostic information to Oracle Support, you might perform various activities to track the service request and implement repairs. Among these activities are the following:
• Add an Oracle bug number to the problem information. To do so, on the Problem Details page, click the Edit button that is adjacent to the Bug# label. This is for your reference only.
• Add comments to the problem activity log. To do so, complete the following steps:
1. Access the Problem Details page for the problem.
2. Click Activity Log to display the Activity Log sub-page.
3. In the Comment field, enter a comment, and then click Add Comment. Your comment is recorded in the activity log.
• Respond to a request by Oracle Support to provide additional diagnostics. Your Oracle Support representative might provide instructions for gathering and uploading additional diagnostics.
Track the SR and Implement Repairs From the Incident Details page, you can run an Oracle advisor to implement repairs. Access the suggested advisor in one of the following ways: • In the Self-Service tab of the Investigate and Resolve section of the Problem Details page. • On the Checker Findings sub-page of the Incident Details page as shown on the slide. The advisors that help you repair critical errors are: • Data Recovery Advisor: Corrupted blocks, corrupted or missing files, and other data failures. • SQL Repair Advisor: SQL statement failures.
Close Incidents and Problems When a particular incident is no longer of interest, you can close it. By default, closed incidents are not displayed on the Problem Details page. All incidents, whether closed or not, are purged after 30 days. You can disable purging for an incident on the Incident Details page. To close incidents: 1. Access the Support Workbench home page. 2. Select the desired problem, and then click View. The Problem Details page appears. 3. Select the incidents to close and then click Close. A confirmation page appears. 4. Click Yes on the Confirmation page to close your incident.
Incident Packaging Configuration
As already seen, you can configure various aspects of retention rules and package generation. Using Support Workbench, you can access the Incident Packaging Configuration page from the Related Links section of the Support Workbench page by clicking the Incident Package Configuration link. Here are the parameters you can change:
• Incident Metadata Retention Period: Metadata is information about the data; for incidents, this means the incident time, ID, size, problem, and so forth. Data is the actual contents of an incident, such as traces.
• Cutoff Age for Incident Inclusion: Only incidents within this range, counting back from now, are included for packaging. If the cutoff age is 90, for instance, the system includes only the incidents from the last 90 days.
• Leading Incidents Count: For every problem included in a package, the system selects a certain number of incidents from the beginning (leading) and the end (trailing) of the problem. For example, if the problem has 30 incidents, and the leading incident count is 5 and the trailing incident count is 4, the system includes the first 5 incidents and the last 4 incidents.
• Trailing Incidents Count: See above.
Incident Packaging Configuration (Continued)
• Correlation Time Proximity: This parameter is the exact time interval that defines "happened at the same time." There is a concept of incidents and problems being correlated with a given incident or problem, that is, of problems that seem to have a connection with a given problem. One criterion for correlation is time correlation: finding the incidents that happened at the same time as the incidents in a problem.
Custom Packaging: Create New Package
Custom Packaging is a more manual method than Quick Packaging, but it gives you greater control over the incident package contents. You can create a new incident package with one or more problems, or you can add one or more problems to an existing incident package. You can then perform a variety of operations on the new or updated incident package, including:
• Adding or removing problems or incidents
• Adding, editing, or removing trace files in the incident package
• Adding or removing external files of any type
• Adding other diagnostic data such as SQL test cases
• Manually finalizing the incident package and then viewing its contents to determine whether you must edit or remove sensitive data, or remove files to reduce the incident package size
With the Custom Packaging method, you create the zip file and request its upload to Oracle Support as two separate steps. Each of these steps can be performed immediately or scheduled for a future date and time.
To package and upload a problem with custom packaging:
1. In the Problems sub-page at the bottom of the Support Workbench home page, select the first problem that you want to package, and then click Package.
2. On the Package: Select Packaging Mode page, select the Custom Packaging option, and then click Continue. The Custom Packaging: Select Package page appears.
3. To create a new incident package, select the Create New Package option, enter an incident package name and description, and then click OK. To add the selected problems to an existing incident package, select the Select from Existing Packages option instead.
Custom Packaging: Manipulate Incident Package
On the Customize Package page, you get confirmation that your new package has been created. This page displays the incidents that are contained in the incident package, plus a selection of packaging tasks to choose from. You run these tasks against the new incident package or the updated existing incident package. As you can see from the slide, you can exclude or include incidents or files, as well as perform many other possible tasks.
Custom Packaging: Finalize Incident Package
Finalizing an incident package adds correlated files from other components, such as Health Monitor, to the package. Recent trace files and log files are also included in the package. You can finalize a package by clicking the Finish Contents Preparation link in the Packaging Tasks section, as shown on the slide. A confirmation page is displayed that lists all files that will be part of the physical package.
Custom Packaging: Generate Package
Once your incident package has been finalized, you can generate the package file. Go back to the corresponding package page and click Generate Upload File. The Generate Upload File page appears. There, select the Full or Incremental option to generate a full or an incremental incident package zip file. For a full incident package zip file, all the contents of the incident package (original contents and all correlated data) are always added to the zip file. For an incremental incident package zip file, only the diagnostic information that is new or modified since the last time you created a zip file for the same incident package is added. Once done, select a schedule and click Submit. If you scheduled the generation to run immediately, a Processing page appears until packaging is finished, followed by a Confirmation page where you can click OK.
Note: The Incremental option is unavailable if a physical file has never been created for the incident package.
Custom Packaging: Upload Package
Once you have generated the physical package, you can go back to the Customize Package page, from where you can click the View/Send Uploaded Files link in the Packaging Tasks section. This takes you to the View/Send Upload Files page, from where you can select your package and click the Send to Oracle button. The Send to Oracle page appears. There, you can enter the service request number for your problem and choose a schedule. You can then click Submit.
Viewing and Modifying Incident Packages
Once a package is created, you can still modify it through customization. For example, go to the Support Workbench page and click the Packages tab. This takes you to the Packages sub-page. From this page, you can select a package and delete it, or click the package link to go to the Package Details page. There, you can click Customize to go to the Customize Package page, from where you can manipulate your package by adding or removing problems, incidents, or files.
Create User-Reported Problems
Critical errors generated internally to the database are automatically added to the Automatic Diagnostic Repository (ADR) and tracked in the Support Workbench. However, there may be a situation in which you want to manually add a problem that you noticed to the ADR, so that you can put that problem through the Support Workbench workflow. An example of such a situation would be if the performance of the database, or of a particular query, suddenly and noticeably degraded. The Support Workbench includes a mechanism for you to create and work with such a user-reported problem.
To create a user-reported problem, go to the Support Workbench page and click the Create User-Reported Problem link in the Related Links section. This takes you to the Create User-Reported Problem page, where you are asked to run a corresponding advisor before continuing. This is only necessary if you are not sure about your problem. If you already know exactly what is going on, select the issue that best describes the type of problem you are encountering and click Continue with Creation of Problem. By clicking this button, you create a pseudo problem inside the Support Workbench, which allows you to manipulate this problem using the previously seen Support Workbench workflow for handling critical errors. You then end up on a Problem Details page for your issue. Note that at first the problem does not have any diagnostic data associated with it. At this point, you need to create a package and upload the necessary trace files by customizing that package, as described previously.
Invoking IPS Using ADRCI
(Slide diagram: the IPS CREATE PACKAGE command and its content selectors — INCIDENT, PROBLEM | PROBLEM KEY, SECONDS | TIME, and NEW INCIDENTS — together with IPS SET CONFIGURATION.)
Invoking IPS Using ADRCI Creating a package is a two-step process: you first create the logical package, and then generate the physical package as a zip file. Both steps can be done using ADRCI commands. To create a logical package, use the IPS CREATE PACKAGE command. Several variants of this command allow you to choose the contents:
• IPS CREATE PACKAGE creates an empty package.
• IPS CREATE PACKAGE PROBLEMKEY creates a package based on a problem key.
• IPS CREATE PACKAGE PROBLEM creates a package based on a problem ID.
• IPS CREATE PACKAGE INCIDENT creates a package based on an incident ID.
• IPS CREATE PACKAGE SECONDS creates a package containing all incidents generated from the specified number of seconds ago until now.
• IPS CREATE PACKAGE TIME creates a package based on the specified time range.
It is also possible to add contents to an existing package. For instance:
• IPS ADD INCIDENT PACKAGE adds an incident to an existing package.
• IPS ADD FILE PACKAGE adds a file inside ADR to an existing package.
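As a hedged illustration of these commands (the incident ID 6745, the package number 2, and the trace file path are all hypothetical), a logical package might be built in an ADRCI session like this:

```
adrci> ips create package incident 6745
adrci> ips add file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_1234.trc package 2
```

The package number to use in subsequent commands is reported by the IPS CREATE PACKAGE command itself.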
Invoking IPS Using ADRCI (Continued) IPS COPY copies files between the ADR repository and the external file system. It has two forms:
• IN FILE: copies an external file into ADR, associating it with an existing package and, optionally, an incident.
• OUT FILE: copies a file from ADR to a location outside ADR.
IPS COPY is essentially used to copy a file out, edit it, and copy it back into ADR. IPS FINALIZE finalizes a package for delivery, which means that other components, such as Health Monitor, are called to add their correlated files to the package. Recent trace files and log files are also included in the package. If required, this step is run automatically when a package is generated. To generate the physical file, use the IPS GENERATE PACKAGE command. The syntax is IPS GENERATE PACKAGE package_number IN path [COMPLETE | INCREMENTAL], which generates a physical zip file for an existing logical package. The file name contains either COM for complete or INC for incremental, followed by a sequence number that is incremented each time a zip file is generated. IPS SET CONFIGURATION is used to set IPS rules. Note: Refer to the Oracle Database Utilities guide for more information about ADRCI.
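For instance, a sketch of generating the physical zip file for a logical package (the package number 2 and the target directory /tmp are assumed values):

```
adrci> ips generate package 2 in /tmp COMPLETE
```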
Health Monitor Overview Beginning with Release 11g, Oracle Database includes a framework called Health Monitor for running diagnostic checks on various components of the database, including files, memory, transaction integrity, metadata, and process usage. These checkers generate reports of their findings as well as recommendations for resolving problems. Health Monitor checks can be run in two ways:
• Reactive: The fault diagnosability infrastructure can run Health Monitor checks automatically in response to critical errors.
• Manual: As a DBA, you can manually run Health Monitor health checks using either the DBMS_HM PL/SQL package or the Enterprise Manager interface.
On the slide, you can see some of the checks that Health Monitor can run. For a complete description of all possible checks, query the V$HM_CHECK view. These health checks fall into one of two categories:
• DB-online: These checks can be run while the database is open (that is, in OPEN mode or MOUNT mode).
• DB-offline: In addition to being runnable while the database is open, these checks can also be run when the instance is available and the database itself is closed (that is, in NOMOUNT mode).
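For instance, one way to list the available checks and whether they are DB-offline capable is to query V$HM_CHECK directly. This is a sketch; the OFFLINE_CAPABLE and INTERNAL_CHECK columns are as documented for 11g, but verify them in your release:

```sql
SQL> SELECT name, offline_capable
  2  FROM   v$hm_check
  3  WHERE  internal_check = 'N';
```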
Health Monitor Overview (Continued) After a checker has run, it generates a report of its execution. This report contains information about the checker's findings, including the priorities (low, high, or critical) of the findings, descriptions of the findings and their consequences, and basic statistics about the execution. Health Monitor generates reports in XML and stores the reports in the ADR. You can view these reports using V$HM_RUN, DBMS_HM, ADRCI, or Enterprise Manager. Note: The Redo Check and the Database Cross Check are DB-offline checks. All other checks are DB-online checks. There are around 25 checks you can run.
Running Health Checks Manually: EM Example Enterprise Manager provides an interface for running Health Monitor checkers. You can find this interface on the Checkers tab of the Advisor Central page. The page lists each checker type; you run a checker by clicking it and then clicking OK on the corresponding checker page after you have entered the parameters for the run. This is illustrated on the slide, where you run the Data Block Checker manually. Once a check is completed, you can view the corresponding checker run details by selecting the checker run from the Results table and clicking Details. Checker runs can be reactive or manual. On the Findings subpage you can see the various findings and corresponding recommendations extracted from V$HM_RUN, V$HM_FINDING, and V$HM_RECOMMENDATION. If you click View XML Report on the Runs subpage, you can view the run report in XML format. Viewing the XML report in Enterprise Manager generates the report if it has not yet been generated in your ADR. You can then view the report using ADRCI without needing to generate it.
Running Health Checks Manually: PL/SQL Example
SQL> exec dbms_hm.run_check('Database Dictionary Check', 'mycheck', 0, 'TABLE_NAME=tab$');
SQL> set long 100000
SQL> select dbms_hm.get_run_report('mycheck') from dual;
DBMS_HM.GET_RUN_REPORT('mycheck')
-------------------------------------------------------------------------------
<TITLE>HM Report: mycheck
Database Dictionary Check 21 mycheck MANUAL COMPLETED
… TABLE_NAME=tab$ …
Dictionary Inconsistency 22 FAILURE OPEN CRITICAL
… invalid column number 7 on Object tab$ Failed Damaged
… Object SH.JFVTEST is referenced …
Running Health Checks Manually: PL/SQL Example You can use the DBMS_HM.RUN_CHECK procedure to run a health check. To call RUN_CHECK, supply the name of the check found in V$HM_CHECK, a name for the run (this is just a label used to retrieve reports later), and the corresponding set of input parameters for controlling its execution. You can view these parameters using the V$HM_CHECK_PARAM view. In the above example, you run a Database Dictionary Check for the TAB$ table. You call this run MYCHECK, and you do not set any timeout for the check. Once it has executed, you execute the DBMS_HM.GET_RUN_REPORT function to get the report extracted from V$HM_RUN, V$HM_FINDING, and V$HM_RECOMMENDATION. The output clearly shows you that a critical error was found in TAB$: this table contains an entry for a table with an invalid number of columns. Furthermore, the report gives you the name of the damaged table referenced in TAB$. When you call the GET_RUN_REPORT function, it generates the XML report file in the HM directory of your ADR. For the above example, the file is called HMREPORT_mycheck.hm. Note: Refer to the Oracle Database PL/SQL Packages and Types Reference for more information on DBMS_HM.
Viewing HM Reports Using the ADRCI Utility You can create and view Health Monitor checker reports using the ADRCI utility. To do that, ensure that operating system environment variables such as ORACLE_HOME are set properly, and then enter the following command at the operating system command prompt: adrci. The utility starts and displays its prompt as shown on the slide. Optionally, you can change the current ADR home. Use the SHOW HOMES command to list all ADR homes, and the SET HOMEPATH command to change the current ADR home. You can then enter the SHOW HM_RUN command to list all the checker runs registered in the ADR repository and visible from V$HM_RUN. Locate the checker run for which you want to create a report and note the checker run name using the corresponding RUN_NAME field. The REPORT_FILE field contains a filename if a report already exists for this checker run. Otherwise, you can generate the report using the CREATE REPORT HM_RUN command as shown on the slide. To view the report, use the SHOW REPORT HM_RUN command.
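Putting these steps together, a hedged sketch of such an ADRCI session follows (the run name HM_RUN_61 is hypothetical; use the RUN_NAME value reported by SHOW HM_RUN):

```
adrci> show hm_run
adrci> create report hm_run HM_RUN_61
adrci> show report hm_run HM_RUN_61
```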
SQL Repair Advisor Overview You run the SQL Repair Advisor after a SQL statement fails with a critical error that generates a problem in the ADR. The advisor analyzes the statement and, in many cases, recommends a patch to repair it. If you implement the recommendation, the applied SQL patch circumvents the failure by causing the query optimizer to choose an alternate execution plan for future executions. This is done without changing the SQL statement itself. Note: If the SQL Repair Advisor finds no workaround, you can still package the incident files and send the corresponding diagnostic data to Oracle Support.
Accessing SQL Repair Advisor Using EM There are basically two ways to access the SQL Repair Advisor from Enterprise Manager. The first and easiest way is when you are alerted in the Diagnostic Summary section of the database home page. Following a SQL statement crash that generates an incident in the ADR, you are automatically alerted through the Active Incidents field. You can click the corresponding link to get to the Support Workbench Problems page, from which you can click the corresponding problem ID link. This takes you to the Problem Details page, from which you can click the SQL Repair Advisor link in the Investigate and Resolve section of the page.
Accessing SQL Repair Advisor Using EM If the SQL statement crash incident is no longer active, you can always go to the Advisor Central page, click the SQL Advisors link, and choose the Click here to go to Support Workbench link in the SQL Advisor section of the SQL Advisors page. This takes you directly to the Problem Details page, where you can click the SQL Repair Advisor link in the Investigate and Resolve section of the page. Note: To access the SQL Repair Advisor in the case of non-incident SQL failures, you can go to either the SQL Details page or the SQL Worksheet.
Using SQL Repair Advisor from EM On the SQL Repair Advisor: SQL Incident Analysis page, specify a Task Name, a Task Description, and a Schedule. When done, click Submit to schedule a SQL diagnostic analysis task. If you specified Immediately, you end up on the Processing: SQL Repair Advisor Task page, which shows you the various steps of the task execution.
Using SQL Repair Advisor from EM Once the SQL Repair Advisor task has executed, you are sent to the SQL Repair Results page for that task. On this page, you can see the corresponding recommendations, and in particular whether a SQL patch was generated to fix your problem. If that is the case, as shown on the slide, you can select the statement for which you want to apply the generated SQL patch and click View. This takes you to the Repair Recommendations for SQL ID page, from which you can ask the system to implement the SQL patch by clicking Implement after selecting the corresponding findings. You then get a confirmation of the implementation, and you can execute your SQL statement again.
Using SQL Repair Advisor from PL/SQL
declare
  rep_out clob;
  t_id    varchar2(50);
begin
  t_id := dbms_sqldiag.create_diagnosis_task(
            sql_text  => 'delete from t t1 where t1.a = ''a'' and rowid <>
                          (select max(rowid) from t t2
                           where t1.a = t2.a and t1.b = t2.b and t1.d = t2.d)',
            task_name => 'sqldiag_bug_5869490',
            problem_type => DBMS_SQLDIAG.PROBLEM_TYPE_COMPILATION_ERROR);
  dbms_sqltune.set_tuning_task_parameter(t_id, '_SQLDIAG_FINDING_MODE',
            dbms_sqldiag.SQLDIAG_FINDINGS_FILTER_PLANS);
  dbms_sqldiag.execute_diagnosis_task(t_id);
  rep_out := dbms_sqldiag.report_diagnosis_task(t_id, DBMS_SQLDIAG.TYPE_TEXT);
  dbms_output.put_line('Report : ' || rep_out);
end;
/
Using SQL Repair Advisor from PL/SQL You can also invoke the SQL Repair Advisor directly from PL/SQL. After you are alerted about an incident SQL failure, you can create a SQL Repair Advisor task using the DBMS_SQLDIAG.CREATE_DIAGNOSIS_TASK function, as illustrated on the slide. You need to specify the SQL statement for which you want the analysis to be done, as well as a task name and the problem type you want to analyze (possible values are PROBLEM_TYPE_COMPILATION_ERROR and PROBLEM_TYPE_EXECUTION_ERROR). You can then set parameters for the created task using the DBMS_SQLTUNE.SET_TUNING_TASK_PARAMETER procedure. Once you are ready, you execute the task using the DBMS_SQLDIAG.EXECUTE_DIAGNOSIS_TASK procedure. Finally, you get the task report using the DBMS_SQLDIAG.REPORT_DIAGNOSIS_TASK function. In the above example, it is assumed that the report asks you to implement a SQL patch to fix the problem. You can then use the DBMS_SQLDIAG.ACCEPT_SQL_PATCH procedure to implement the SQL patch.
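A minimal sketch of that last step (the task name is taken from the slide example; the TASK_NAME and TASK_OWNER parameter names follow the 11g DBMS_SQLDIAG documentation, but verify them in your release):

```sql
SQL> exec dbms_sqldiag.accept_sql_patch(task_name => 'sqldiag_bug_5869490', task_owner => 'SYS')
```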
Viewing, Disabling, or Removing a SQL Patch After you apply a SQL patch with the SQL Repair Advisor, you may want to view it to confirm its presence, disable it, or remove it. One reason to remove a patch is that you have installed a later release of Oracle Database that fixes the problem that caused the failure in the non-patched SQL statement. To view, disable or enable, or remove a SQL patch, access the Server page in Enterprise Manager and click the SQL Plan Control link in the Query Optimizer section of the page. This takes you to the SQL Plan Control page. From there, click the SQL Patch tab. On the resulting SQL Patch subpage, locate the desired patch by examining the associated SQL statement. Select it, and apply the corresponding task: Disable, Enable, or Delete.
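The same lifecycle operations are also exposed in PL/SQL through DBMS_SQLDIAG. The following is a sketch under the assumption that the patch is named my_sql_patch (a hypothetical name; query DBA_SQL_PATCHES for the real one), using the ALTER_SQL_PATCH and DROP_SQL_PATCH procedures as documented for 11g:

```sql
-- Disable the patch without removing it (assumed patch name)
SQL> exec dbms_sqldiag.alter_sql_patch('my_sql_patch', 'STATUS', 'DISABLED')

-- Remove the patch entirely
SQL> exec dbms_sqldiag.drop_sql_patch('my_sql_patch')
```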
Database Repair Advisor
• Oracle provides outstanding tools for repairing problems
  – Lost files, corrupt blocks, etc.
• Analyzing the underlying problem and choosing the right solution is often the biggest component of downtime
• Analyzes failures based on symptoms
  – E.g., "Open failed" because datafiles missing
Intelligent Resolution: Database Repair Advisor Data Recovery Advisor: Enterprise Manager integrates with database health checks and RMAN to display data corruption problems, assess the extent of a problem (critical, high priority, low priority), describe its impact, recommend repair options, conduct a feasibility check of the customer-chosen option, and automate the repair process. Note: For more information about the Data Recovery Advisor, refer to the corresponding lesson in this course.
Summary
In this lesson, you should have learned how to:
• Set up the Automatic Diagnostic Repository
• Use the Support Workbench
• Run health checks
• Use the SQL Repair Advisor
Oracle Database 11g: New Features for Administrators 13 - 1
Objectives
After completing this lesson, you should be able to:
• Describe your options for repairing data failure
• Use the new RMAN data repair commands:
  – List failures
  – Receive repair advice
  – Repair failure
• Perform proactive failure checks
• Query the Data Recovery Advisor views
Repairing Data Failures
• Data Guard provides failover to a standby database, so that your operations are not affected by downtime.
• Data Recovery Advisor, a new feature in Oracle Database 11g, analyzes failures based on symptoms and determines repair strategies:
  – Aggregation of multiple failures for efficient repair
  – Presenting a single, recommended repair option
  – Performing automatic repairs
• The Flashback technology protects the lifecycle of a row and assists in repairing logical problems.
Repairing Data Failures A "data failure" is a missing, corrupted, or inconsistent data file, log file, control file, or other file whose content the Oracle instance cannot access. When your database has a problem, analyzing the underlying cause and choosing the correct solution is often the biggest component of downtime. Oracle Database 11g offers several new and enhanced tools for analyzing and repairing database problems. • Data Guard, by allowing you to fail over to a standby database (which has its own copy of the data), allows you to continue operation if the primary database experiences a data failure. Then, after failing over to the standby, you can take the time to repair the failed database (the old primary) without worrying about the impact on your applications. There are many enhancements to Data Guard, which are addressed in separate lessons. • Data Recovery Advisor is a built-in tool that automatically diagnoses data failures and reports the appropriate repair option. If, for example, Data Recovery Advisor discovers many bad blocks, it recommends restoring the entire file rather than repairing individual blocks. In this way it assists you in performing the correct repair for a failure. You can either repair a data failure manually or request the Data Recovery Advisor to execute the repair for you. This decreases the amount of time needed to recover from a failure.
Repairing Data Failures (continued) You can use the Flashback technology to repair logical problems. • Flashback Archive maintains persistent changes of table data for a specified period of time, allowing you to access the archived data. • Flashback Transaction allows you to back out of a transaction and all conflicting transactions with a single click. For more details, see the lesson titled "Using Flashback and LogMiner". What you already know: • RMAN automates data file media recovery (a common form of recovery that protects against logical and physical failures) and block media recovery (that recovers individual blocks rather than a whole data file). For more details, see the lesson titled "Using RMAN Enhancements". • Automatic Storage Management (ASM) protects against storage failures.
Data Recovery Advisor
• Fast detection, analysis, and repair of failures
• Downtime and runtime failures
• Minimizing disruptions for users
• User interfaces:
  – EM GUI interface
  – RMAN command line
Functionality of the Data Recovery Advisor The Data Recovery Advisor automatically gathers data failure information when an error is encountered. In addition, it can proactively check for failures. In this mode, it can potentially detect and analyze data failures before a database process discovers the corruption and signals an error. (Note that repairs are always under human control.) Data failures can be very serious. For example, if your log files are missing, you cannot start your database. Some data failures (like block corruptions in data files) are not catastrophic, in that they do not take the database down or prevent you from starting the Oracle instance. The Data Recovery Advisor handles both cases: the one in which you cannot start up the database (because some required database files are missing, inconsistent, or corrupted) and the one in which file corruptions are discovered during runtime. The preferred way to address serious data failures is first to fail over to a standby database, if you are in a Data Guard configuration. This allows users to come back online as soon as possible. Then you need to repair the primary cause of the data failure; fortunately, this does not impact your users.
User Interfaces The Data Recovery Advisor is available from Enterprise Manager (EM) Database Control and Grid Control. When failures exist, select Availability > Perform Recovery. You can also use it via the RMAN command-line. For example: rman target / nocatalog Supported Database Configurations In the current release, Data Recovery Advisor supports single-instance databases. Oracle Real Application Clusters databases are not supported. Data Recovery Advisor cannot use blocks or files transferred from a standby database to repair failures on a primary database. Also, you cannot use Data Recovery Advisor to diagnose and repair failures on a standby database. However, the Data Recovery Advisor does support failover to a standby database as a repair option (as mentioned above).
Data Recovery Advisor
1. Assess data failures
2. List failures by severity
3. Advise on repair
4. Choose and execute repair
5. Perform proactive checks
Data Recovery Advisor The automatic diagnostic workflow in Oracle Database 11g performs workflow steps for you. With the Data Recovery Advisor, you only need to initiate an advise and a repair. 1. Health Monitor automatically executes checks and logs failures and their symptoms as "findings" into the Automatic Diagnostic Repository (ADR). For more details on Health Monitor, see the Diagnostics eStudy. 2. The Data Recovery Advisor consolidates findings into failures. It lists the results of previously executed assessments with failure severity (critical or high). 3. When you ask for repair advice on a failure, the Data Recovery Advisor maps failures to automatic and manual repair options, checks basic feasibility, and presents you with the repair advice. 4. You can choose to manually execute a repair or request the Data Recovery Advisor to do it for you. 5. In addition to the automatic, primarily "reactive" checks of the Health Monitor and Data Recovery Advisor, Oracle recommends that you also use the VALIDATE command as a "proactive" check.
Data Failures Data failures are detected by checks, which are diagnostic procedures that assess the health of the database or its components. Each check can diagnose one or more failures, which are mapped to a repair. Checks can be reactive or proactive. When an error occurs in the database, "reactive checks" are automatically executed. You can also initiate "proactive checks", for example, by executing the VALIDATE DATABASE command. In Enterprise Manager, select Availability > Perform Recovery.
Listing Data Failures On the Perform Recovery page, click Perform Automated Repair. This example shows how the Data Recovery Advisor lists data failures and details. Activities that you can initiate from the Data Recovery Advisor page include advising on, classifying, and closing failures. The RMAN LIST FAILURE command can also display data failures and details. Failure assessments are not initiated here; they were previously executed and stored in the ADR. Failures are listed in decreasing priority order: CRITICAL, HIGH, and LOW. Failures with the same priority are listed in increasing timestamp order.
Listing of Data Failures
The RMAN LIST FAILURE command lists previously executed failure assessments.
Syntax: LIST FAILURE [ ALL | CRITICAL | HIGH | LOW | CLOSED | failnum[,failnum,…] ] [ EXCLUDE FAILURE failnum[,failnum,…] ] [ DETAIL ]
Listing of Data Failures The RMAN LIST FAILURE command lists failures. If the target instance uses a recovery catalog, it can be in STARTED mode; otherwise it must be in MOUNTED mode. To learn more about the syntax:
• failnum: Number of the failure to display repair options for
• ALL: List failures of all priorities.
• CRITICAL: List failures of CRITICAL priority and OPEN status. These failures require immediate attention because they make the whole database unavailable (for example, a missing control file).
• HIGH: List failures of HIGH priority and OPEN status. These failures make a database partly unavailable or unrecoverable, so they should be repaired quickly (for example, missing archived redo logs).
• LOW: List failures of LOW priority and OPEN status. Failures of low priority can wait until more important failures are fixed.
• CLOSED: List only closed failures.
• EXCLUDE FAILURE: Exclude the specified list of failure numbers from the list.
• DETAIL: List failures by expanding the consolidated failure. For example, if there are multiple block corruptions in a file, the DETAIL option lists each one of them.
See the Oracle Database Backup and Recovery Reference for details on command syntax.
Example of Listing Data Failures
[oracle1@stbbv06 orcl]$ rman
Recovery Manager: Release 11.1.0.3.0 - Beta on Wed Dec 20 11:22:10 2006
Copyright (c) 1982, 2006, Oracle. All rights reserved.
RMAN> connect target sys/oracle@orcl
connected to target database: ORCL (DBID=1137451268)
using target database control file instead of recovery catalog
RMAN> list failure all;
List of Database Failures
=========================
Failure ID Priority Status    Time Detected Summary
---------- -------- --------- ------------- -------
5          HIGH     OPEN      20-DEC-06     one or more datafiles are missing
RMAN> list failure detail;
List of Database Failures
=========================
Failure ID Priority Status    Time Detected Summary
---------- -------- --------- ------------- -------
5          HIGH     OPEN      20-DEC-06     one or more datafiles are missing
List of child failures for parent failure ID 5
Failure ID Priority Status    Time Detected Summary
---------- -------- --------- ------------- -------
8          HIGH     OPEN      20-DEC-06     datafile 5: '/u01/app/oracle/oradata/orcl/example01.dbf' is missing
  Impact: tablespace EXAMPLE is unavailable
RMAN>
Classifying and Closing Failures
RMAN CHANGE FAILURE command:
• Changing failure priority (except for CRITICAL)
• Closing one or more failures
Example:
RMAN> change failure 5 priority low;
List of Database Failures
=========================
Failure ID Priority Status    Time Detected Summary
---------- -------- --------- ------------- -------
5          HIGH     OPEN      20-DEC-06     one or more datafiles are missing
Do you really want to change the above failures (enter YES or NO)? yes
changed 1 failures to LOW priority
Classifying and Closing Failures This command is used to change failure priority or close one or more failures. Syntax:
CHANGE FAILURE
  { ALL | CRITICAL | HIGH | LOW | failnum[,failnum,…] }
  [ EXCLUDE FAILURE failnum[,failnum,…] ]
  { PRIORITY {CRITICAL | HIGH | LOW} | CLOSE }  -- CLOSE changes the status of the failure(s) to closed
  [ NOPROMPT ]                                  -- do not ask the user for confirmation
A failure priority can be changed only from HIGH to LOW and from LOW to HIGH. It is an error to change the priority level of CRITICAL failures. (One reason why you may wish to change a failure from HIGH to LOW is to avoid seeing it in the default output of the LIST FAILURE command.) Open failures are closed implicitly when a failure is repaired. However, you can also explicitly close a failure. This involves a re-evaluation of all other open failures, because some of them might become irrelevant as a result of the failure closure. By default, the command asks the user to confirm a requested change.
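For example, to explicitly close a failure without being prompted (failure number 5 is carried over from the earlier example):

```
RMAN> change failure 5 close noprompt;
```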
Advising on Repair
RMAN ADVISE FAILURE command:
• Displaying a summary of the input failure list
• Including a warning if new failures appeared in ADR
• Displaying a manual checklist
• Listing a single recommended repair option
General repair options:
– No-data-loss repair
– Data-loss repair
Advising on Repair The RMAN ADVISE FAILURE command displays a recommended repair option for the specified failures. If this command is executed from within Enterprise Manager, then Data Guard is presented as a repair option. (This is not the case if the command is executed directly from the RMAN command line.) The ADVISE FAILURE command prints a summary of the input failure. The command implicitly closes all open failures that are already fixed. The default behavior (when no option is used) is to advise on all the CRITICAL and HIGH priority failures that are recorded in the Automatic Diagnostic Repository. If a new failure has been recorded in the ADR since the last LIST FAILURE command, this command includes a WARNING before advising on all CRITICAL and HIGH failures. Two general repair options are implemented: no-data-loss and data-loss repairs. Syntax:
ADVISE FAILURE
  [ ALL | CRITICAL | HIGH | LOW | failnum[,failnum,…] ]
  [ EXCLUDE FAILURE failnum[,failnum,…] ]
Advising on Repair On the Data Recovery Advisor page, click the Advise button. When the Data Recovery Advisor generates an automated repair option, it generates a script that shows you how RMAN plans to repair the failure. If you do not want the Data Recovery Advisor to automatically repair the failure, you can use this script as a starting point for your manual repair. The OS location of the script is printed at the end of the command output. You can examine this script, customize it (if needed), and also execute it manually if, for example, your audit trail requirements recommend such an action.
Advising on Repair When the Data Recovery Advisor generates a manual checklist, it considers two types of failures: • Failures that require human intervention, such as a connectivity failure when a disk cable is not plugged in. • Failures that are repaired faster if you can undo a previous erroneous action. For example, if you renamed a datafile by mistake, it is faster to rename it back than to initiate an RMAN restore from backup. The Data Recovery Advisor displays this page as part of its "advise" process. Select the "Manual Actions Were Performed" checkbox if you have already executed a manual repair option.
Command Line Example
RMAN> advise failure;
List of Database Failures
=========================
Failure ID Priority Status    Time Detected Summary
---------- -------- --------- ------------- -------
5          HIGH     OPEN      20-DEC-06     one or more datafiles are missing
List of child failures for parent failure ID 5
Failure ID Priority Status    Time Detected Summary
---------- -------- --------- ------------- -------
8          HIGH     OPEN      20-DEC-06     datafile 5: '/u01/app/oracle/oradata/orcl/example01.dbf' is missing
  Impact: tablespace EXAMPLE is unavailable
analyzing automatic repair options; this may take some time
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=117 device type=DISK
analyzing automatic repair options complete
Manual Checklist
================
1. If file /u01/app/oracle/oradata/orcl/example01.dbf was unintentionally renamed or moved, restore it.
Automated Repair Options
========================
Option Strategy     Repair Description
------ ------------ ------------------
1      no data loss Restore and recover datafile 5.
  Repair script: /u01/app/oracle/diag/rdbms/orcl/orcl/hm/reco_2979128860.hm
RMAN>
Executing Repairs This command should be used after an ADVISE FAILURE command in the same RMAN session. By default (with no option), the command uses the single recommended repair option of the last ADVISE FAILURE execution in the current session. If none exists, the REPAIR FAILURE command initiates an implicit ADVISE FAILURE command. By default, you are asked to confirm the command execution, because you may be requesting substantial changes that take time to complete. During execution of a repair, the output of the command indicates what phase of the repair is being executed. After completing the repair, the command closes the failure. You cannot run multiple concurrent repair sessions. However, concurrent REPAIR … PREVIEW sessions are allowed. • PREVIEW: Do not execute the repair(s); instead, display the previously generated RMAN script with all repair actions and comments. • NOPROMPT: Do not ask for confirmation.
Oracle Database 11g: New Features for Administrators 13 - 17
Example of Repairing a Failure

RMAN> repair failure preview;

Strategy     Repair script
------------ -------------
no data loss /u01/app/oracle/diag/rdbms/orcl/orcl/hm/reco_2537574800.hm

contents of repair script:
# restore and recover datafile
sql 'alter database datafile 5 offline';
restore check readonly datafile 5;
recover datafile 5;
sql 'alter database datafile 5 online';

RMAN> repair failure;

Strategy     Repair script
------------ -------------
no data loss /u01/app/oracle/diag/rdbms/orcl/orcl/hm/reco_2537574800.hm

contents of repair script:
# restore and recover datafile
sql 'alter database datafile 5 offline';
restore check readonly datafile 5;
recover datafile 5;
sql 'alter database datafile 5 online';

Do you really want to execute the above repair (enter YES or NO)? y
executing repair script
sql statement: alter database datafile 5 offline

Starting restore at 21-DEC-06
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00005 to /u01/app/oracle/oradata/orcl/example01.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/flash_recovery_area/ORCL/backupset/2006_12_20/o1_mf_nnndf_BACKUP_ORCL_000004_1_2rm4v9dj_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/flash_recovery_area/ORCL/backupset/2006_12_20/o1_mf_nnndf_BACKUP_ORCL_000004_1_2rm4v9dj_.bkp tag=BACKUP_ORCL_000004_122006114740
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15
Finished restore at 21-DEC-06
Oracle Database 11g: New Features for Administrators 13 - 18
Example of Repairing a Failure (continued)

Starting recover at 21-DEC-06
using channel ORA_DISK_1
starting media recovery
archived log for thread 1 with sequence 5 is already on disk as file /u01/app/oracle/flash_recovery_area/ORCL/archivelog/2006_12_20/o1_mf_1_5_2rm50clp_.arc
archived log for thread 1 with sequence 6 is already on disk as file /u01/app/oracle/flash_recovery_area/ORCL/archivelog/2006_12_20/o1_mf_1_6_2rmsgwyo_.arc
archived log for thread 1 with sequence 7 is already on disk as file /u01/app/oracle/flash_recovery_area/ORCL/archivelog/2006_12_20/o1_mf_1_7_2rnbosby_.arc
archived log for thread 1 with sequence 8 is already on disk as file /u01/app/oracle/flash_recovery_area/ORCL/archivelog/2006_12_21/o1_mf_1_8_2rnyc4c5_.arc
archived log for thread 1 with sequence 9 is already on disk as file /u01/app/oracle/flash_recovery_area/ORCL/archivelog/2006_12_21/o1_mf_1_9_2rolp2b4_.arc
archived log for thread 1 with sequence 10 is already on disk as file /u01/app/oracle/flash_recovery_area/ORCL/archivelog/2006_12_21/o1_mf_1_10_2rp2gg32_.arc
archived log for thread 1 with sequence 11 is already on disk as file /u01/app/oracle/flash_recovery_area/ORCL/archivelog/2006_12_21/o1_mf_1_11_2rpllvqk_.arc
archived log file name=/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2006_12_20/o1_mf_1_5_2rm50clp_.arc thread=1 sequence=5
archived log file name=/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2006_12_20/o1_mf_1_6_2rmsgwyo_.arc thread=1 sequence=6
archived log file name=/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2006_12_20/o1_mf_1_7_2rnbosby_.arc thread=1 sequence=7
archived log file name=/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2006_12_21/o1_mf_1_8_2rnyc4c5_.arc thread=1 sequence=8
archived log file name=/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2006_12_21/o1_mf_1_9_2rolp2b4_.arc thread=1 sequence=9
media recovery complete, elapsed time: 00:00:01
Finished recover at 21-DEC-06
sql statement: alter database datafile 5 online
repair failure complete

RMAN>
Oracle Database 11g: New Features for Administrators 13 - 19
Executing Repairs In Enterprise Manager, the Data Recovery Advisor leads you to this page. The job scheduler initiates the execution of the RMAN repair script.
Oracle Database 11g: New Features for Administrators 13 - 20
Executing Repairs The Data Recovery Advisor displays this page. In the preceding example, a successful repair is completed.
Oracle Database 11g: New Features for Administrators 13 - 21
Data Recovery Advisor Views
Querying dynamic data dictionary views:
• V$IR_FAILURE: Lists all failures, including closed ones (result of the LIST FAILURE command)
• V$IR_MANUAL_CHECKLIST: Lists manual advice (result of the ADVISE FAILURE command)
• V$IR_REPAIR: Lists repairs (result of the ADVISE FAILURE command)
Data Recovery Advisor Views See the Oracle Database Reference for details on the dynamic data dictionary views that the Data Recovery Advisor uses.
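As one possible sketch (assuming the documented columns of V$IR_FAILURE), the open failures could be checked directly from SQL*Plus:

```sql
SQL> SELECT failure_id, parent_id, priority, status, description
  2  FROM   v$ir_failure
  3  WHERE  status = 'OPEN'
  4  ORDER  BY priority;
```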
Oracle Database 11g: New Features for Administrators 13 - 22
Best Practice: Proactive Checks
Invoking proactive health check of the database and its components: • Health Monitor or RMAN VALIDATE DATABASE command • Checking for logical and physical corruption • Findings logged in ADR
Best Practice: Proactive Checks
For very important databases, you may want to execute additional proactive checks (possibly daily, during off-peak periods). You can schedule periodic health checks through Health Monitor or by using the RMAN VALIDATE command. In general, when a reactive check detects failures in a database component, you may want to execute a more complete check of the affected component.
The RMAN VALIDATE DATABASE command is used to invoke health checks for the database and its components. It extends the existing VALIDATE BACKUPSET command. Any problem detected during validation is displayed to you. Problems initiate the execution of a failure assessment. If a failure is detected, it is logged into the Automatic Diagnostic Repository (ADR) as a finding. You can use the LIST FAILURE command to view all failures recorded in the repository. The VALIDATE command supports validation of individual backup sets and data blocks.
In a physical corruption, the database does not recognize the block at all. In a logical corruption, the contents of the block are logically inconsistent. By default, the VALIDATE command checks for physical corruption only. You can specify CHECK LOGICAL to check for logical corruption as well.
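A proactive check that includes logical corruption might be sketched as follows:

```
RMAN> VALIDATE CHECK LOGICAL DATABASE;
```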
Oracle Database 11g: New Features for Administrators 13 - 23
Best Practice: Proactive Checks (continued)
Block corruptions can be divided into interblock corruption and intrablock corruption. In intrablock corruption, the corruption occurs within the block itself and can be either physical or logical. In interblock corruption, the corruption occurs between blocks and can only be logical. The VALIDATE command checks for intrablock corruption only.
Example:

RMAN> validate database;

Starting validate at 21-DEC-06
using channel ORA_DISK_1
channel ORA_DISK_1: starting validation of datafile
channel ORA_DISK_1: specifying datafile(s) for validation
input datafile file number=00001 name=/u01/app/oracle/oradata/orcl/system01.dbf
input datafile file number=00002 name=/u01/app/oracle/oradata/orcl/sysaux01.dbf
input datafile file number=00005 name=/u01/app/oracle/oradata/orcl/example01.dbf
input datafile file number=00003 name=/u01/app/oracle/oradata/orcl/undotbs01.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/orcl/users01.dbf
channel ORA_DISK_1: validation complete, elapsed time: 00:00:15

List of Datafiles
=================
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- --------
1
channel ORA_DISK_1: starting validation of datafile
channel ORA_DISK_1: specifying datafile(s) for validation
including current control file for validation
including current SPFILE in backup set
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01

List of Control File and SPFILE
===============================
File Type
Setting Corruption-Detection Parameters
You can use the DB_ULTRA_SAFE parameter for easy manageability. It affects the default values of the following parameters:
• DB_BLOCK_CHECKING, which initiates checking of database blocks. This check can often prevent memory and data corruption. (Default: FALSE; recommended: FULL)
• DB_BLOCK_CHECKSUM, which initiates the calculation and storage of a checksum in the cache header of every data block when writing it to disk. Checksums help detect corruption caused by underlying disks, storage systems, or I/O systems. (Default: TYPICAL; recommended: TYPICAL)
• DB_LOST_WRITE_PROTECT, which initiates checking for "lost writes." A data block lost write occurs when the I/O subsystem signals the completion of a block write that has not actually been written to persistent storage; the lost write is detected on a physical standby database, even though the write operation was reported complete on the primary database. (Default: TYPICAL; recommended: TYPICAL)
If you set any of these parameters explicitly, your values remain in effect. The DB_ULTRA_SAFE parameter changes only the default values for these parameters.
Oracle Database 11g: New Features for Administrators 13 - 26
Setting Corruption-Detection Parameters (continued)
Depending on your system's tolerance for block corruption, you can intensify the checking for block corruption. Enabling the DB_ULTRA_SAFE parameter (default: OFF) results in increased system overhead because of these more intensive checks. The amount of overhead is related to the number of blocks changed per second, so it cannot be easily quantified. For a high-update application, you can expect a significant increase in CPU usage, likely in the 10 to 20 percent range, but possibly higher. This overhead can be alleviated by allocating additional CPUs.
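Because DB_ULTRA_SAFE is a static parameter, it would be set in the SPFILE and take effect at the next restart. A minimal sketch for a test system (DATA_AND_INDEX is one of the documented values):

```sql
SQL> ALTER SYSTEM SET db_ultra_safe = DATA_AND_INDEX SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
```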
Oracle Database 11g: New Features for Administrators 13 - 27
Summary
In this lesson, you should have learned how to: • Describe your options for repairing data failure • Use the new RMAN data repair commands: – List failures – Receive repair advice – Repair failure
• Perform proactive failure checks • Query the Data Recovery Advisor views
Oracle Database 11g: New Features for Administrators 14 - 1
Objectives
After completing this lesson, you should be able to: • Configure the password file to use case-sensitive passwords • Encrypt a tablespace • Create a virtual private catalog for RMAN • Configure fine-grained access to network services
Oracle Database 11g: New Features for Administrators 14 - 2
Secure Password Support
More Secure Password Support
Passwords:
• May be longer (up to 50 characters)
• Are case sensitive
• Contain more characters
• Use a more secure hash algorithm
• Use salt in the hash algorithm
Usernames are still Oracle identifiers (up to 30 characters, case insensitive)
Secure Password Support
You must use more secure passwords to meet the demands of compliance with various security and privacy regulations. Passwords that are very short or that are formed from a limited set of characters are susceptible to brute force attacks. Longer passwords that allow a wider range of characters are much more difficult to guess or find. In Oracle Database 11g, the password is handled differently than in previous versions:
• Passwords may be longer. Passwords of up to 50 characters are allowed.
• Passwords are case sensitive. Uppercase and lowercase characters are now different characters when used in a password.
• Passwords may contain special characters and multibyte characters. In previous versions of the database, only the '$', '_', and '#' special characters were allowed in a password without quoting the password.
• Passwords are always passed through a hash algorithm and then stored as a user credential. When the user presents a password, it is hashed and then compared to the stored credential. In Oracle Database 11g, the hash algorithm is SHA-1, a stronger public algorithm than the one used in previous versions of the database, producing a 160-bit hash.
• Passwords always use salt. A hash function always produces the same output given the same input. Salt is a unique (random) value that is added to the input to ensure that the output credential is unique.
Oracle Database 11g: New Features for Administrators 14 - 3
Automatic Secure Configuration
Oracle Database 11g installs and creates the database with certain security features recommended by the CIS (Center for Internet Security) benchmark. The CIS-recommended configuration is more secure than the 10g Release 2 default installation, yet open enough to allow the majority of applications to run successfully. Many customers have already adopted this benchmark. Some recommendations of the CIS benchmark may be incompatible with some applications.
Oracle Database 11g: New Features for Administrators 14 - 4
Password Configuration
By default:
• Default password profile is enabled
• Account is locked after 10 failed login attempts
In upgrade:
• Passwords are case insensitive until changed
• Passwords become case sensitive by ALTER USER
On creation:
• Passwords are case sensitive
Secure Default Configuration
When creating a custom database using the Database Configuration Assistant (DBCA), you can specify the Oracle Database 11g default security configuration. By default, if a user tries to log in to an Oracle database multiple times using an incorrect password, Oracle Database delays each login after the third try. This protection applies to attempts made from different IP addresses or multiple client connections. The delay gradually increases with each subsequent failed attempt, up to a maximum of about ten seconds.
The default password profile is enabled with the following settings:
PASSWORD_LIFE_TIME 180
PASSWORD_GRACE_TIME 7
PASSWORD_REUSE_TIME UNLIMITED
PASSWORD_REUSE_MAX UNLIMITED
FAILED_LOGIN_ATTEMPTS 10
PASSWORD_LOCK_TIME 1
PASSWORD_VERIFY_FUNCTION NULL
When an Oracle Database 10g database is upgraded, passwords remain case insensitive until the ALTER USER… command is used to change the password. When a new database is created, passwords are case sensitive by default.
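After an upgrade, one way to see which accounts still have pre-11g (case-insensitive) password versions, and to force case sensitivity for one of them, is sketched below (the user name SCOTT and the new password are illustrative):

```sql
SQL> SELECT username, password_versions FROM dba_users;
SQL> ALTER USER scott IDENTIFIED BY NewCasePwd1;  -- password becomes case sensitive
```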
Oracle Database 11g: New Features for Administrators 14 - 5
Enable Built-in Password Complexity Checker
Execute the utlpwdmg.sql script to create the password verify function: SQL> CONNECT / as SYSDBA SQL> @?/rdbms/admin/utlpwdmg.sql
Alter the default profile: ALTER PROFILE DEFAULT LIMIT PASSWORD_VERIFY_FUNCTION verify_function_11g;
Enable Built-in Password Complexity Checker
The verify_function_11g function is a sample PL/SQL function that can be easily modified to enforce the password complexity policies at your site. This function does not require special characters to be embedded in the password. Both verify_function_11g and the older verify_function are included in the utlpwdmg.sql file.
To enable password complexity checking, create a verification function owned by SYS. Use one of the supplied functions or modify one of them to meet your requirements. The example shows using the utlpwdmg.sql script. With no modification, the script creates verify_function_11g. The verify_function_11g function checks that the password: contains at least 8 characters, contains at least one number and one alphabetic character, and differs from the previous password by at least 3 characters. The function also checks that the password is not: a username or a username appended with a number from 1 to 100, a username reversed, a server name or a server name appended with 1-100, or one of a set of well-known and common passwords such as 'welcome1', 'database1', 'oracle123', or 'oracle' appended with 1-100.
Oracle Database 11g: New Features for Administrators 14 - 6
Managing Default Audits
Review audit logs:
• Default audit options cover important security privileges
Archive audit records:
• Export
• Copy to another table
Remove archived audit records
Managing Default Audits
Review the audit logs. By default, auditing is enabled in Oracle Database 11g for certain privileges that are very important to security. The audit trail is recorded in the database AUD$ table by default; the AUDIT_TRAIL parameter is set to DB. These audits should not have a large impact on database performance for most sites.
Archive audit records. To retain audit records, export them using Data Pump Export, or use the SELECT statement to capture a set of audit records into a separate table.
Remove archived audit records. Remove audit records from the SYS.AUD$ table after review and archival. Audit records take up space in the SYSTEM tablespace. If the SYSTEM tablespace cannot grow, and there is no more space for audit records, errors are generated for each audited statement. Because CREATE SESSION is one of the audited privileges, no new sessions may be created.
Note: The SYSTEM tablespace is created with the AUTOEXTEND ON option, so it grows as needed until there is no more space available on the disk.
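A simple archive-and-purge cycle using the SELECT approach might be sketched as follows (the archive table name is illustrative; Data Pump Export is the other option described above):

```sql
SQL> CREATE TABLE audit_archive AS SELECT * FROM sys.aud$;  -- capture current records
SQL> DELETE FROM sys.aud$;                                  -- purge after review and archival
SQL> COMMIT;
```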
Oracle Database 11g: New Features for Administrators 14 - 7
Managing Default Audits
The following privileges are audited for all users on success and failure, and by access:
CREATE EXTERNAL JOB
CREATE ANY JOB
GRANT ANY OBJECT PRIVILEGE
EXEMPT ACCESS POLICY
CREATE ANY LIBRARY
GRANT ANY PRIVILEGE
DROP PROFILE
ALTER PROFILE
DROP ANY PROCEDURE
ALTER ANY PROCEDURE
CREATE ANY PROCEDURE
ALTER DATABASE
GRANT ANY ROLE
CREATE PUBLIC DATABASE LINK
DROP ANY TABLE
ALTER ANY TABLE
CREATE ANY TABLE
DROP USER
ALTER USER
CREATE USER
CREATE SESSION
AUDIT SYSTEM
ALTER SYSTEM
Oracle Database 11g: New Features for Administrators 14 - 8
Adjust Security Settings
When you create a database using the DBCA tool, you are offered a choice of security settings:
• Keep the enhanced 11g default security settings (recommended). These settings include enabling auditing and the new default password profile.
• Revert to pre-11g default security settings.
To disable a particular category of enhanced settings for compatibility purposes, choose from the following:
- Revert audit settings to pre-11g defaults
- Revert password profile settings to pre-11g defaults
These settings can also be changed after the database is created by using DBCA. Secure permissions on the software are always set; they are not affected by the user's choice for the Security Settings option.
Oracle Database 11g: New Features for Administrators 14 - 9
Setting Security Parameters
Restrict release of server information • SEC_RETURN_SERVER_RELEASE Protect against DoS attacks • SEC_PROTOCOL_ERROR_FURTHER_ACTION • SEC_PROTOCOL_ERROR_TRACE_ACTION Protect against old protocols attacks • SEC_DISABLE_OLDER_ORACLE_RPCS Protect against brute force attacks • SEC_MAX_FAILED_LOGIN_ATTEMPTS
Setting Security Parameters
A set of new parameters has been added to Oracle Database 11g to enhance the default security of the database. These parameters are system-wide and static.
Restrict release of server information
The new SEC_RETURN_SERVER_RELEASE parameter reduces the amount of information about the server that is available to the client. When it is set to TRUE, the full banner is displayed. When the value is set to FALSE, a limited generic banner is displayed. (This does not work yet in the 11.1.0.4 beta.)
Protect against denial-of-service (DoS) attacks
The two parameters shown specify the actions to be taken when the database receives bad packets from a client, the assumption being that the bad packets are from a possibly malicious client. The SEC_PROTOCOL_ERROR_FURTHER_ACTION parameter specifies what action is to be taken with the client connection: continue, drop the connection, or delay accepting requests. The other parameter, SEC_PROTOCOL_ERROR_TRACE_ACTION, specifies a monitoring action: NONE, TRACE, LOG, or ALERT.
Protect against attacks on old protocols
Older protocols that are not as secure are a vector for attacks. If these older protocols are not required by the applications using your database, disable them by setting the SEC_DISABLE_OLDER_ORACLE_RPCS parameter to TRUE.
Protect against brute force attacks
A new initialization parameter, SEC_MAX_FAILED_LOGIN_ATTEMPTS, which has a default setting of 10, causes a connection to be dropped automatically after the specified number of failed attempts. This parameter is enforced even when the password profile is not enabled.
Oracle Database 11g: New Features for Administrators 14 - 10
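Because these parameters are static, they would be set in the SPFILE and applied at the next restart. A sketch (the value 3 is illustrative):

```sql
SQL> ALTER SYSTEM SET sec_max_failed_login_attempts = 3 SCOPE=SPFILE;
```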
Setting Database Administrator Authentication
• Use a password file with case-sensitive passwords
• Enable strong authentication for administrator roles:
– Grant administrator role in the directory
– Use Kerberos tickets
– Use certificates with SSL
Setting Database Administrator Authentication
The database administrator must always be authenticated. Oracle Database 11g provides new methods that make administrator authentication more secure and that centralize the administration of these privileged users.
Use case-sensitive passwords with a password file for remote connections:
orapwd file=orapworcl entries=5 ignorecase=N
If your concern is that the password file might be vulnerable or that the maintenance of many password files is a burden, then strong authentication can be implemented:
• Grant the OSDBA or OSOPER role in Oracle Internet Directory
• Use Kerberos tickets
• Use certificates over SSL
To use any of the strong authentication methods, the LDAP_DIRECTORY_SYSAUTH initialization parameter must be set to YES. Set this parameter to NO to disable the use of strong authentication methods. Authentication through Oracle Internet Directory or through Kerberos can also provide centralized administration or single sign-on. If the password file is configured, it is checked first. The user may also be authenticated by the local OS by being a member of the OSDBA or OSOPER group.
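Since LDAP_DIRECTORY_SYSAUTH is a static parameter, enabling strong authentication might be sketched as:

```sql
SQL> ALTER SYSTEM SET ldap_directory_sysauth = YES SCOPE=SPFILE;
```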
Oracle Database 11g: New Features for Administrators 14 - 11
Setup Directory Authentication for Administrative Users
1. Create the user in the directory
2. Grant the SYSDBA or SYSOPER role to the user
3. Set the LDAP_DIRECTORY_SYSAUTH parameter in the database
4. Check that the LDAP_DIRECTORY_ACCESS parameter is set to PASSWORD or SSL
5. Test the connection:
$ sqlplus fred/t%3eEGQ@orcl AS SYSDBA
Setup Directory Authentication for Administrative Users To enable the Oracle Internet Directory (OID) server to authorize SYSDBA and SYSOPER connections: 1. Configure the administrative user by using the same procedures you would use to configure a typical user. 2. In OID, grant SYSDBA or SYSOPER to the user for the database the user will administer. 3. Set the LDAP_DIRECTORY_SYSAUTH initialization parameter to YES. When set to YES, the LDAP_DIRECTORY_SYSAUTH parameter enables SYSDBA and SYSOPER users to authenticate to the database, by a strong authentication method. 4. Ensure that the LDAP_DIRECTORY_ACCESS initialization parameter is not set to NONE. The possible values are PASSWORD or SSL. 5. Afterwards, the administrative user can log in by including the net service name in the CONNECT statement. For example, for Fred to log on as SYSDBA if the net service name is orcl: CONNECT fred/t%3eEGQ@orcl AS SYSDBA
Note: If the database is configured to use a password file for remote authentication, the password file will be checked first.
Oracle Database 11g: New Features for Administrators 14 - 12
Setup Kerberos Authentication for Administrative Users
1. Create the user in the Kerberos domain
2. Configure OID for Kerberos authentication
3. Grant the SYSDBA or SYSOPER role to the user in OID
4. Set the LDAP_DIRECTORY_SYSAUTH parameter in the database
5. Set the LDAP_DIRECTORY_ACCESS parameter
6. Test the connection:
$ sqlplus /@orcl AS SYSDBA
Setup Kerberos Authentication for Administrative Users To enable Kerberos to authorize SYSDBA and SYSOPER connections: 1. Configure the administrative user by using the same procedures you would use to configure a typical user. For more information on configuring Kerberos authentication, see the Oracle Database Advanced Security Administrator’s Guide 11g. 2. Configure OID for Kerberos authentication. See Oracle Database Enterprise User Administrator's Guide 11g Release 1 3. In OID, grant SYSDBA or SYSOPER to the user for the database the user will administer. 4. Set the LDAP_DIRECTORY_SYSAUTH initialization parameter to YES. When set to YES, the LDAP_DIRECTORY_SYSAUTH parameter enables SYSDBA and SYSOPER users to authenticate to the database, by a strong authentication method. 5. Ensure that the LDAP_DIRECTORY_ACCESS initialization parameter is not set to NONE. This will be set to either PASSWORD or SSL 6. Afterwards, the administrative user can log in by including the net service name in the CONNECT statement. For example, to log on as SYSDBA if the net service name is orcl: CONNECT /@orcl AS SYSDBA
Oracle Database 11g: New Features for Administrators 14 - 13
Setup SSL Authentication for Administrative Users
1. Configure the client to use SSL
2. Configure the server to use SSL
3. Configure OID for SSL user authentication
4. Grant SYSOPER or SYSDBA to the user
5. Set the LDAP_DIRECTORY_SYSAUTH parameter in the database
6. Test the connection:
$ sqlplus /@orcl AS SYSDBA
Setup SSL Authentication for Administrative Users
To enable SYSDBA and SYSOPER connections using certificates and SSL (for more information on configuring SSL authentication, see the Oracle Database Advanced Security Administrator's Guide 11g):
1. Configure the client to use SSL:
• Set up the client wallet and user certificate. Update the wallet location in sqlnet.ora.
• Configure the Oracle net service name to include server distinguished names and use TCP/IP with SSL in tnsnames.ora.
• Configure TCP/IP with SSL in listener.ora.
• Set the client SSL cipher suites and the required SSL version, and set SSL as an authentication service in sqlnet.ora.
2. Configure the server to use SSL:
• Enable SSL for your database listener on TCPS and provide a corresponding TNS name.
• Store your database PKI credentials in the database wallet.
• Set the LDAP_DIRECTORY_ACCESS initialization parameter to SSL.
3. Configure OID for SSL user authentication. See Oracle Database Enterprise User Administrator's Guide 11g Release 1.
4. In OID, grant SYSDBA or SYSOPER to the user for the database the user will administer.
5. Set the LDAP_DIRECTORY_SYSAUTH initialization parameter to YES. When set to YES, this parameter enables SYSDBA and SYSOPER users to authenticate to the database by a strong authentication method.
Oracle Database 11g: New Features for Administrators 14 - 14
6. Afterwards, the administrative user can log in by including the net service name in the CONNECT statement. For example, to log on as SYSDBA if the net service name is orcl:
Transparent Data Encryption
• Support for Log Miner
• Support for logical standby
• Tablespace encryption
• Hardware-based master key protection
Transparent Data Encryption Several new features enhance the capabilities of Transparent Data Encryption, and build on the same infrastructure.
Oracle Database 11g: New Features for Administrators 14 - 15
TDE and Log Miner
Log Miner supports Transparent Data Encryption encrypted columns. Restrictions: • The wallet holding the TDE master keys must be open • Hardware Security Modules are not supported • User Held Keys are not supported
TDE and Log Miner
With Transparent Data Encryption (TDE), encrypted column data is encrypted in the data files, the undo segments, and the redo logs. Oracle logical standby depends on Log Miner's ability to transform redo logs into SQL statements for SQL Apply. Log Miner has been enhanced to support TDE, which provides the ability to support TDE on a logical standby database.
The wallet containing the master keys for TDE must be open for Log Miner to decrypt the encrypted columns. The database instance must be mounted to open the wallet; therefore, Log Miner cannot populate V$LOGMNR_CONTENTS to support TDE if the database instance is not mounted. Log Miner populates V$LOGMNR_CONTENTS for tables with encrypted columns, displaying the column data unencrypted for rows involved in DML statements. Note that this is not a security violation: TDE is a file-level encryption feature, not an access control feature; it does not prohibit DBAs from looking at encrypted data.
In Oracle Database 11g, Log Miner does not support TDE with a hardware security module (HSM) for key storage. User-held keys for TDE are PKI public and private keys supplied by the user for TDE master keys; user-held keys are not supported by Log Miner.
Oracle Database 11g: New Features for Administrators 14 - 16
TDE and Logical Standby
Logical Standby database with TDE: • Wallet on the standby is a copy of the wallet on the primary • Master key may be changed only on the primary • Wallet open and close commands are not replicated • Table key may be changed on the standby • Table encryption algorithm may be changed on the standby
TDE and Logical Standby
The same wallet is required for both databases. The wallet must be copied from the primary database to the standby database every time the master key is changed using ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY <wallet_password>. An error is raised if the DBA attempts to change the master key on the standby database.
If an auto-login wallet is not used, the wallet must be opened on the standby. Wallet open and close commands are not replicated to the standby. A different password can be used to open the wallet on the standby: the wallet owner can change the password to be used for the copy of the wallet on the standby.
The DBA can change the encryption key or the encryption algorithm of a replicated table on the logical standby. This does not require a change to the master key or wallet. The operation is performed with:
ALTER TABLE table_name REKEY USING '3DES168';
There can be only one algorithm per table. Changing the algorithm at the table changes the algorithm for all the columns. A column on the standby can have a different algorithm than the primary or no encryption. To change the table key the guard setting must be lowered to NONE. TDE can be used on local tables in the logical standby independently of the primary, if encrypted columns are not replicated into the standby.
Oracle Database 11g: New Features for Administrators 14 - 17
Using Tablespace Encryption
Create an encrypted tablespace:
1. Create or open the encryption wallet
SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "welcome1";
2. Create a tablespace with the encryption keywords
SQL> CREATE TABLESPACE encrypt_ts
  2> DATAFILE '$ORACLE_HOME/dbs/encrypt.dat' SIZE 100M
  3> ENCRYPTION USING '3DES168'
  4> DEFAULT STORAGE (ENCRYPT);
Tablespace Encryption
Tablespace encryption is based on block-level encryption that encrypts on write and decrypts on read. The data is not encrypted in memory, so the only encryption penalty is associated with I/O. The SQL access paths are unchanged and all data types are supported.
To use tablespace encryption, the encryption wallet must be open. The CREATE TABLESPACE command has an ENCRYPTION clause that sets the encryption properties, and an ENCRYPT storage parameter that causes the encryption to be used. You specify USING 'encrypt_algorithm' to indicate the name of the algorithm to be used. Valid algorithms are 3DES168, AES128, AES192, and AES256; the default is AES128. You can view the properties in the V$ENCRYPTED_TABLESPACES view.
The encrypted data is protected during operations such as JOIN and SORT. This means that the data is safe when it is moved to temporary tablespaces. Data in undo and redo logs is also protected.
Restrictions:
• Temporary and undo tablespaces cannot be encrypted. (Selected blocks are encrypted.)
• BFILEs and external tables are not encrypted.
• Transporting tablespaces across platforms with different endianness is not supported.
• The key for an encrypted tablespace cannot be changed at this time. A workaround is to create a tablespace with the desired properties and move all objects to the new tablespace.
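To confirm which tablespaces are encrypted and with which algorithm, a sketch against the view mentioned above (column names as documented for V$ENCRYPTED_TABLESPACES):

```sql
SQL> SELECT ts#, encryptionalg, encryptedts
  2  FROM   v$encrypted_tablespaces;
```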
Oracle Database 11g: New Features for Administrators 14 - 18
Hardware Security Module
Encrypt and decrypt operations are performed on the hardware security module
Hardware Security Module A hardware security module (HSM) is a physical device that provides secure storage for encryption keys. It also provides secure computational space (memory) to perform encryption and decryption operations. HSM is a more secure alternative to the Oracle wallet. Transparent data encryption can use HSM to provide enhanced security for sensitive data. An HSM is used to store the master encryption key used for transparent data encryption. The key is secure from unauthorized access attempts as the HSM is a physical device and not an operating system file. All encryption and decryption operations that use the master encryption key are performed inside the HSM. This means that the master encryption key is never exposed in insecure memory. There are several vendors that provide Hardware Security Modules. The vendor must supply the appropriate libraries.
Using a Hardware Security Module with TDE
1. Decrypt encrypted data before switching to HSM
2. Configure sqlnet.ora: ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=HSM))
3. Copy the PKCS#11 library to the correct path
4. Set up the HSM
5. Generate a master encryption key for HSM-based encryption: ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY user_Id:password
Using HSM involves an initial setup of the HSM device. You also need to configure transparent data encryption to use HSM. After the initial setup is done, the HSM can be used just like an Oracle software wallet. The following steps configure and use a hardware security module:
• Decrypt encrypted data before switching to HSM.
• Set the ENCRYPTION_WALLET_LOCATION parameter in sqlnet.ora: ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=HSM))
• Copy the PKCS#11 library to its correct path.
• Set up the HSM.
• Generate a master encryption key for HSM-based encryption: ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY user_Id:password
• Ensure that the HSM is accessible.
Encryption for LOB Columns
CREATE TABLE test1 (doc CLOB ENCRYPT USING 'AES128') LOB(doc) STORE AS SECUREFILE (CACHE NOLOGGING);
• LOB encryption is allowed only for SECUREFILE LOBS • All LOBs in the LOB column are encrypted • LOBs can be encrypted on per-column or per-partition basis – Allows for the co-existence of SECUREFILE and BASICFILE LOBs
Encryption for LOB Columns Oracle Database 11g introduces a completely reengineered large object (LOB) data type that dramatically improves performance, manageability, and ease of application development. This SecureFiles implementation of LOBs offers advanced, next-generation functionality such as intelligent compression and transparent encryption. The encrypted data in SecureFiles is stored in-place and is available for random reads and writes. You must create the LOB with the SECUREFILE parameter, with encryption enabled (ENCRYPT) or disabled (DECRYPT, the default) on the LOB column. The current TDE syntax is used for extending encryption to LOB data types. The LOB implementation from prior versions is still supported for backward compatibility and is now referred to as BasicFiles. If you add a LOB column to a table, you can specify whether it should be created as SecureFiles or BasicFiles; the default LOB type is BasicFiles, to ensure backward compatibility. Valid algorithms are 3DES168, AES128, AES192, and AES256. The default is AES192. Note: For further discussion of SecureFiles, see the "Managing Storage" lesson.
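For illustration of adding a LOB column as SecureFiles rather than accepting the BasicFiles default (the table and column names here are hypothetical, not from the course material):

```sql
-- Add an encrypted SecureFiles CLOB column to an existing table.
-- Without the SECUREFILE keyword, an added LOB column defaults
-- to BasicFiles for backward compatibility.
ALTER TABLE documents
  ADD (contract CLOB ENCRYPT USING 'AES192')
  LOB (contract) STORE AS SECUREFILE;
```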
Using Kerberos Enhancements
• Use stronger encryption algorithms (no action required) • Interoperability between MS KDC and MIT KDC (no action required) • Longer principal names CREATE USER KRBUSER IDENTIFIED EXTERNALLY AS '[email protected]';
Kerberos Enhancements The Oracle client Kerberos implementation now makes use of secure encryption algorithms such as 3DES and AES in place of DES, which makes using Kerberos more secure. The Kerberos authentication mechanism in Oracle Database now supports the following encryption types: • DES3-CBC-SHA (DES3 algorithm in CBC mode with HMAC-SHA1 as checksum) • RC4-HMAC (RC4 algorithm with HMAC-MD5 as checksum) • AES128-CTS (AES algorithm with 128-bit key in CTS mode with HMAC-SHA1 as checksum) • AES256-CTS (AES algorithm with 256-bit key in CTS mode with HMAC-SHA1 as checksum) The Kerberos implementation has been enhanced to interoperate smoothly with Microsoft and MIT Key Distribution Centers. The Kerberos principal name can now contain more than 30 characters; it is no longer restricted by the number of characters allowed in a database user name. If the Kerberos principal name is longer than 30 characters, use: CREATE USER KRBUSER IDENTIFIED EXTERNALLY AS '[email protected]';
Managing TDE with Enterprise Manager Using Enterprise Manager, the administrator can open and close the wallet, change the location of the wallet, and generate a new master key. The example shows that TDE options are part of the Create Table and Edit Table processes. Table encryption options allow you to choose the encryption algorithm and salt; the table key can also be reset. The other place where TDE changed the management pages is Export and Import Data. If TDE is configured, the wallet is open, and the table to be exported has encrypted columns, the export wizard offers data encryption. The same arbitrary key (password) that was used on export must be provided on import in order to import any encrypted columns. A partial import that does not include tables containing encrypted columns does not require the password.
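The same export/import behavior can be seen outside Enterprise Manager on the Data Pump command line. This is a sketch only: the connection string, directory object, table name, and passphrase are hypothetical.

```shell
# Export a table containing TDE-encrypted columns, supplying an
# arbitrary passphrase (all names here are illustrative).
expdp system@orcl DIRECTORY=dpump_dir DUMPFILE=emp.dmp \
      TABLES=hr.employees ENCRYPTION_PASSWORD=MySecret123

# The same passphrase must be supplied on import to recover
# the encrypted columns.
impdp system@orcl DIRECTORY=dpump_dir DUMPFILE=emp.dmp \
      ENCRYPTION_PASSWORD=MySecret123
```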
Managing Tablespace Encryption with Enterprise Manager
Managing Tablespace Encryption with Enterprise Manager You can manage tablespace encryption from the same console that you use to manage transparent data encryption. Once encryption has been enabled for the database, the DBA can set the encryption property of a tablespace on the Edit Tablespace page or when creating a new tablespace.
Managing Virtual Private Database With Enterprise Manager 11g, you can now manage Virtual Private Database policies from the console. You can enable, disable, add, and drop policies. The console also allows you to manage application contexts. (The application context page is not shown.)
Managing Label Security with Database Control Oracle Label Security (OLS) management is integrated with Enterprise Manager Database Control: the database administrator can manage OLS from the same console that is used for managing the database instance, listeners, and host. The differences between Database Control and Grid Control are minimal; with Grid Control, OLS is managed from the same console that is used for managing database instances, listeners, and other targets.
Managing Label Security with Oracle Internet Directory
Label Security with OID Oracle Label Security policies can now be created and stored in Oracle Internet Directory using Enterprise Manager, then propagated to one or more databases. A database subscribes to a policy, making the policy available to the database, and the policy can then be applied to tables and schemas in the database. Label authorizations can be assigned to enterprise users in the form of profiles.
Enterprise Users / Enterprise Manager The functionality of the Enterprise Security Manager has been integrated into Enterprise Manager. Enterprise Manager allows you to create and configure enterprise domains, enterprise roles, user schema mappings, and proxy permissions. Databases can be configured for enterprise user security after they have been registered with OID; the registration is performed through the DBCA tool. Enterprise users and groups can also be configured for enterprise user security. The creation of enterprise users and groups can be done through Delegated Administration Service (DAS). Administrators for the database can be created and given the appropriate roles in OID through Enterprise Manager. Enterprise Manager allows you to manage enterprise users and roles, schema mappings, domain mappings, and proxy users.
Enterprise Manager Security Management Security management has been integrated into Enterprise Manager. Oracle Label Security, application contexts, and Virtual Private Database, previously administered through the Oracle Policy Manager tool, are now managed through Enterprise Manager. Enterprise User Security is also now managed through Enterprise Manager instead of a separate tool. A graphical interface for managing Transparent Data Encryption has been added.
Enterprise Manager Policy Manager
Enterprise Manager Policy Manager Enterprise Manager Policy Manager allows you to compare your database configuration against a set of Oracle best practices. These best practices are in line with CIS and PCI requirements.
Oracle Audit Vault Enhancements
• Hardened Streams configuration • DML/DDL capture on the SYS schema • Capture of actions against the SYS, SYSTEM, and CTXSYS schemas • Capture of changes to SYS.AUD$ and SYS.FGA_LOG$
Oracle Audit Vault Enhancements Oracle Audit Vault provides auditing in a heterogeneous environment. Audit Vault consists of a secure database to store and analyze audit information from various sources, such as databases and OS audit trails. Oracle Streams is an asynchronous information-sharing infrastructure that facilitates sharing of events within a database or from one database to another. Events can be DML or DDL changes happening in a database. These events are captured by Streams implicit capture and are propagated to a queue in a remote database, where they are consumed by a subscriber, typically the Streams apply process. Oracle Streams can already capture all DML on participating tables and all DDL in the database. Streams is enhanced to capture the events that change the database audit trail and forward that information to Audit Vault. The transfer and collection configuration is hardened: the configuration of Audit Vault is driven entirely from the Audit Vault instance, and audit sources require only an initial configuration to enable them.
RMAN Security Enhancements Backup shredding is a key management feature that allows the DBA to delete the encryption key of transparently encrypted backups without physical access to the backup media. The encrypted backups are rendered inaccessible if the encryption key is destroyed. This does not apply to password-protected backups. Configure backup shredding with: CONFIGURE ENCRYPTION EXTERNAL KEY STORAGE ON; or SET ENCRYPTION EXTERNAL KEY STORAGE ON;
The default setting is OFF, and backup shredding is not enabled. No new command is needed to shred a backup; use: DELETE FORCE;
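Putting the two commands above together, a minimal RMAN session might look like this (the backup set key is illustrative; a target connection is assumed):

```sql
-- From the RMAN prompt, with a target database connection.
-- Enable external key storage so encrypted backups can later be shred:
CONFIGURE ENCRYPTION EXTERNAL KEY STORAGE ON;

-- Shredding reuses the ordinary delete command; destroying the key
-- renders the encrypted backup pieces unreadable on the media:
DELETE FORCE BACKUPSET 1234;
```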
Virtual Private Catalog The RMAN catalog has been enhanced to support virtual private RMAN catalogs for groups of databases and users. The catalog owner creates the base catalog and grants RECOVERY_CATALOG_OWNER to the owner of the virtual catalog. The catalog owner can either grant access to registered databases to the virtual catalog owner or grant REGISTER to the virtual catalog owner. The virtual catalog owner can then connect to the catalog for a particular target or register a target database. Once the virtual private catalog is configured, the virtual private catalog owner uses it just like a standard base catalog. This feature allows a consolidation of RMAN repositories while maintaining a separation of responsibilities. The catalog owner can access all the registered database information in the catalog and can see a listing of all registered databases with the SQL*Plus command: SELECT DISTINCT db_name FROM DBINC;
Using RMAN Virtual Private Catalog
1. Create an RMAN base catalog RMAN> CONNECT CATALOG catowner/oracle@catdb; RMAN> CREATE CATALOG;
2. Grant RECOVERY_CATALOG_OWNER to the VPC owner SQL> CONNECT SYS/oracle@catdb AS SYSDBA SQL> GRANT RECOVERY_CATALOG_OWNER TO vpcowner;
3. Grant REGISTER to the VPC owner RMAN> CONNECT CATALOG catowner/oracle@catdb; RMAN> GRANT REGISTER DATABASE TO vpcowner; or grant CATALOG FOR DATABASE to the VPC owner RMAN> GRANT CATALOG FOR DATABASE db10g TO vpcowner;
Using RMAN Virtual Private Catalog The RMAN catalog has been enhanced so that you can create virtual private RMAN catalogs for groups of databases and users. 1. The catalog owner creates the base catalog. 2. The DBA on the catalog database creates the user that will own the virtual private catalog and grants RECOVERY_CATALOG_OWNER to the owner of the virtual catalog. 3. The catalog owner can grant access for previously registered databases to the virtual catalog owner, or grant REGISTER to the virtual catalog owner. The GRANT CATALOG command is: GRANT CATALOG FOR DATABASE prod1, prod2 TO vpcowner;
The GRANT REGISTER command is: GRANT REGISTER DATABASE TO vpcowner;
The virtual catalog owner can then connect to the catalog for a particular target or register a target database. Once the virtual private catalog is configured the virtual private catalog owner uses it just like a standard base catalog.
Using RMAN Virtual Private Catalog (cont)
4. Create a virtual catalog for 11g clients RMAN> CONNECT CATALOG vpcowner/oracle@catdb; RMAN> CREATE VIRTUAL CATALOG; or create a virtual catalog for pre-11g clients SQL> CONNECT vpcowner/oracle@catdb SQL> exec catowner.dbms_rcvcat.create_virtual_catalog;
5. Register a not previously cataloged database RMAN> CONNECT TARGET / CATALOG vpcowner/oracle@catdb; RMAN> REGISTER DATABASE;
6. Use the virtual catalog RMAN> CONNECT TARGET / CATALOG vpcowner/oracle@catdb; RMAN> BACKUP DATABASE;
Using RMAN Virtual Private Catalog (continued) 4. Create a virtual private catalog. • If the target database is an Oracle Database 11g database and the RMAN client is an 11g client, you can use the RMAN command: CREATE VIRTUAL CATALOG;
• If the target database is Oracle Database 10g Release 2 or earlier (using a compatible client), you must execute the supplied procedure from SQL*Plus: base_catalog_owner.dbms_rcvcat.create_virtual_catalog;
5. Connect to the catalog using the VPC owner login, and use it as a normal catalog. This feature allows a consolidation of RMAN repositories while maintaining a separation of responsibilities. The catalog owner can access all the registered database information in the catalog and can see a listing of all registered databases with the SQL*Plus command: SELECT DISTINCT db_name FROM DBINC;
The virtual catalog owner can see only the databases that have been granted to it. If the catalog owner has not been granted SYSDBA or SYSOPER on the target database, then most RMAN operations cannot be performed by the catalog owner.
Managing Fine-Grained Access to External Network Services 1. Create an ACL and its privileges BEGIN DBMS_NETWORK_ACL_ADMIN.CREATE_ACL ( acl => 'us-oracle-com-permissions.xml', description => 'Permissions for oracle network', principal => 'SCOTT', is_grant => TRUE, privilege => 'connect'); END;
Managing Fine-Grained Access to External Network Services The network utility family of PL/SQL packages, such as UTL_TCP, UTL_INADDR, UTL_HTTP, UTL_SMTP, and UTL_MAIL, allows Oracle users to make network callouts from the database using raw TCP or higher-level protocols built on raw TCP. Previously, a user either had the EXECUTE privilege on these packages or did not, and there was no control over which network hosts were accessed. The new package DBMS_NETWORK_ACL_ADMIN allows fine-grained control using access control lists (ACLs) implemented by XML DB. The first step is to create an access control list (ACL). The ACL is a list of users and privileges held in an XML file. The XML document named in the acl parameter is relative to the /sys/acl/ folder in XML DB. In the example, SCOTT is granted connect. The username is case-sensitive in the ACL and must match the username of the session. There are only resolve and connect privileges; the connect privilege implies resolve. Optional parameters can specify a start and end timestamp for these privileges. To add more users and privileges to this ACL, use the ADD_PRIVILEGE procedure.
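As a sketch of the ADD_PRIVILEGE procedure mentioned above (the principal HR is hypothetical; the ACL name is carried over from the slide example):

```sql
-- Add another user to the existing ACL, granting only resolve.
BEGIN
  DBMS_NETWORK_ACL_ADMIN.ADD_PRIVILEGE (
    acl       => 'us-oracle-com-permissions.xml',
    principal => 'HR',          -- case-sensitive, illustrative user
    is_grant  => TRUE,
    privilege => 'resolve');
END;
/
```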
Managing Fine-Grained Access to External Network Services 2. Assign an ACL to one or more network hosts BEGIN DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL ( acl => 'us-oracle-com-permissions.xml', host => '*.us.oracle.com', lower_port => 80, upper_port => null); END;
Managing Fine-Grained Access to External Network Services Assign an ACL to one or more network hosts. The ASSIGN_ACL procedure associates the ACL with a network host and, optionally, a port or range of ports. In the example, the host parameter uses a wildcard character in the host name to assign the ACL to all hosts of a domain. The use of wildcards affects the order of precedence for the evaluation of ACLs: fully qualified host names with ports are evaluated before host names with ports; fully qualified host names are evaluated before partial domain names; and subdomains are evaluated before the top-level domain. Multiple hosts can be assigned to the same ACL, and multiple users can be added to the same ACL, in any order after the ACL has been created.
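To review the resulting assignments and check a particular user's privilege, a sketch along these lines can be used (the ACL name and user are carried over from the slide examples):

```sql
-- List which ACLs are assigned to which hosts and port ranges.
SELECT host, lower_port, upper_port, acl
FROM   dba_network_acls;

-- Check whether SCOTT holds the connect privilege in the ACL:
-- CHECK_PRIVILEGE returns 1 (granted), 0 (denied), or NULL (undefined).
SELECT DECODE(
         DBMS_NETWORK_ACL_ADMIN.CHECK_PRIVILEGE (
           'us-oracle-com-permissions.xml', 'SCOTT', 'connect'),
         1, 'GRANTED', 0, 'DENIED', 'UNDEFINED') AS status
FROM   dual;
```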
Summary
In this lesson, you should have learned how to: • Configure the password file to use case-sensitive passwords • Encrypt a tablespace • Create a virtual private catalog for RMAN • Configure fine-grained access to network services
Practice # Overview: Using Security Features This practice covers the following topics: • Configuring the password file to use case sensitive passwords • Encrypting a tablespace • Creating and using a virtual private catalog • Performing RMAN operations as SYSOPER
SecureFiles Overview This feature introduces a completely reengineered large object (LOB) data type to dramatically improve performance, manageability, and ease of application development. The new implementation also offers advanced, next-generation functionality such as intelligent compression and transparent encryption. This feature significantly strengthens the native content management capabilities of Oracle Database. SecureFiles: API The enhanced SecureFiles LOB APIs provide seamless access to both SecureFiles and BasicFiles LOBs. With this feature, you need only learn one set of APIs, irrespective of the LOB implementation you are using. The benefit of this feature is greater ease of use. The new APIs are an extension of the old APIs, so no relearning is required.
SecureFiles Overview (Continued) SecureFiles: Compression This feature allows you to explicitly compress SecureFiles to gain disk, I/O, and redo logging savings. The benefits of this feature are: • Reduced costs due to more efficient utilization of space. • Improved performance of SecureFiles, as compression reduces I/O and redo logging (at some CPU expense). As CPUs get faster and clustered computing becomes more ubiquitous, it is safer to err on the side of more CPU as long as it saves I/O and redo logging to disk. SecureFiles: Data Path Optimization This feature entails a number of performance optimizations for SecureFiles, including: • Dynamic use of CACHE and NOCACHE, to avoid polluting the buffer cache for large cached SecureFiles LOBs • SYNC and ASYNC, to take advantage of the COMMIT NOWAIT BATCH semantics of transaction durability • Write Gather Caching, similar to the dirty write caches of file servers. This write gathering amortizes the cost of space allocation, inode updates, and redo logging, and enables large I/Os to disk. • New DLM locking semantics for SecureFiles LOB blocks: Oracle Database no longer uses a cache fusion lock for each block of the SecureFiles. Instead, it amortizes the cost of going to the DLM by covering all SecureFiles LOB blocks with a single DLM lock. This feature moves LOB performance close to that of other file systems. SecureFiles: Deduplication Oracle Database can now automatically detect duplicate SecureFiles LOB data and conserve space by storing only one copy. This feature implements disk storage, I/O, and redo logging savings for SecureFiles. SecureFiles: Encryption This feature introduces a new, more efficient encryption facility for SecureFiles. The encrypted data is now stored in-place and is available for random reads and writes. The benefit of this feature is enhanced data security.
SecureFiles: Inodes New storage structures for SecureFiles have been designed and implemented in this release to support high-performance (low-latency, high-throughput, concurrent, space-optimized) transactional access to large object data. In addition to improving basic data access, the new storage structures also support rich functionality, all with minimal performance cost, such as: • Implicit compression and encryption • Data sharing • User-controlled versioning Note: The COMPATIBLE initialization parameter must be set to 11.1 or higher to use SecureFiles. The BasicFiles (previous LOB) format is still supported under 11.1 compatibility. There is no downgrade capability after 11.1 is set.
Benefits of SecureFile LOBs
SecureFiles:
• Are easy to use
• Provide superior performance over BASICFILE LOBs
• Have fewer tunable storage parameters
• Reduce fragmentation
• Improve DML performance
• Improve read/write performance
• Provide efficient reuse of free space
– Flashback mode
– AUTO mode
– No Retention mode
• Enable tuning of space management performance
Benefits of SecureFile LOBs The new SecureFile space management model eliminates most of the drawbacks of basic LOBs. SecureFiles are easy to use and provide improved performance when compared to LOBs in earlier database releases. The key benefits of SecureFiles are: Fewer tunable storage parameters: There is no requirement to specify parameters such as FREEPOOLS, FREELIST GROUPS, PCTVERSION, RETENTION, and CHUNK. FREEPOOLS and FREELIST GROUPS were provided as a hint to specify the concurrency in RAC; while the former could be altered only offline, the latter could not be altered at all. These parameters are no longer available. Fragmentation: In prior releases, CHUNK provided the ability to batch I/O. The size of the chunk was user-specifiable, the default being the tablespace block size. In Oracle Database 11g, CHUNK is a hidden concept, totally managed by the database. The chunk sizes vary dynamically based on factors such as the size of the LOB and the availability of space in the segment. By using variable-sized chunks, internal fragmentation is minimized. The second type of fragmentation that impacted I/O performance in prior releases was fragmentation of the LOB instantiation: the LOB was allocated in too many small chunks, or the chunks were not co-located on disk. This had a tremendous impact on I/O by increasing seek times and not maximizing the I/O bandwidth through prefetching of the fragments. With the new LOB storage, chunks vary in size from the block size up to 64 MB, and a best effort is made to place contiguous data in physically adjacent locations on disk.
Benefits of SecureFile LOBs Performance of DML operations: The performance improvement of space search is many-fold: • Separating committed and uncommitted data in different data structures avoids having to verify the transactional state of chunks before consuming them. • Deletes are several times faster, since the time to delete is not proportional to the size of the LOB instantiation; rather, it depends on the number of LOB instantiations freed. • Space management activities such as updating the metadata structures are reduced by batching the allocation of metadata structures. • Keeping in-memory statistics on concurrency levels helps better distribute heat on data structures. • In-memory statistics on space usage patterns help make better decisions in doing proactive space management. • An in-memory fast space allocator tries to dispense free space by reading from the SGA. When sufficient space is kept free, allocations are made with zero block reads. • A background process tries to maintain free space in the segments by pre-allocating space based on segment growth. Performance of reads/writes: Delayed allocation of space is used to improve data co-location and minimize fragmentation of LOBs. Data is cached in the write gather cache before the space layer is called to allocate space on disk. This reduces the number of calls for metadata management and significantly improves the ability to use large chunks of space. During segment growth, when requests come serially, the server allocates chunks that are physically adjacent. Efficient reuse of free space: The undo generated on LOB columns is huge. The undo is not copied to the undo tablespace, because that entails a huge I/O performance overhead due to writing to the redo logs and undo segment. Rollback of such transactions also suffers a performance impact due to reading from undo segments and copying all the data back into the LOB segment.
Oracle uses a shadow-paging technique to provide transaction recovery and complete recovery. In the new storage scheme, updated data is left in the original blocks, new blocks are allocated to contain the changes, and pointers in the metadata are updated to reflect the change. During transaction recovery, it is sufficient to flip the pointers in the metadata. Because metadata blocks are transactionally managed, complete recovery on the metadata blocks reveals the correct LOB blocks. In the new storage architecture, for updates involving smaller pieces of data, the changes are made through in-place updates. The freed space in the LOB segment is ordered by "freed time," and a FIFO-based reuse mechanism is used to reclaim the oldest committed free space first. • Flashback mode: When the database is in flashback mode, the space requirements for LOB undo retained in the LOB segment can be very high. The user can restrict the space usage for the LOB segment by either specifying a limit on the LOB segment size or specifying a minimum duration for which the undo should be retained in the LOB segment. • AUTO mode: In this mode, the goal of space management is to provide complete recovery for LOBs. Flashback is not guaranteed. LOB_UNDO_RETENTION is computed as the maximum query length for the LOB segment. MIN(AUM_UNDO_RETENTION, LOB_UNDO_RETENTION) is used to retain the undo in the LOB segment. • No Retention mode: Provides no retention of versioned space, for benchmarking purposes and for users who do not anticipate flashback or complete recovery. Committed undo is not retained and can be reclaimed in any order. Monitoring to tune space management performance: One of the many reasons the prior LOB implementation regressed in performance was that it lacked self-adjustment of system resources (memory, space, and so on) to adapt to dynamic workloads.
Previously, a LOB segment neither pre-allocated space under high DML activity nor released space when there was no DML activity for a long time. Another example: previously, when LOBs became fragmented during concurrent workloads, there was no way to defragment them to improve their read performance.
Enabling SecureFiles Storage
SecureFiles storage can be enabled : • Using the DB_SECUREFILE initialization parameter, values: – ALWAYS | PERMITTED | NEVER | IGNORE
• Using the ALTER SESSION | SYSTEM command: SQL> ALTER SYSTEM SET db_securefile = 'ALWAYS';
Enabling SecureFiles Storage The DB_SECUREFILE initialization parameter allows DBAs to determine the usage of SecureFiles. Valid values are: • PERMITTED: Allow SecureFiles to be created (default). • NEVER: Disallow SecureFiles from being created going forward. • ALWAYS: Force all LOBs created going forward to be SecureFiles. • IGNORE: Disallow SecureFiles and ignore any errors that would otherwise be caused by forcing BasicFiles with SecureFiles options. If NEVER is specified, any LOBs that are specified as SecureFiles are created as BasicFiles. All SecureFiles-specific storage options and features (for example, compression, encryption, deduplication) cause an exception. The BasicFiles defaults are used for storage options not specified. If ALWAYS is specified, all LOBs created in the system are created as SecureFiles. The LOB must be created in an Automatic Segment Space Management (ASSM) tablespace; otherwise an error occurs. Any BasicFiles storage options specified are ignored. The SecureFiles defaults for all storage can be changed using the ALTER SYSTEM command as shown above.
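As a sketch, the same parameter can also be adjusted for a single session (assuming session-level modification of DB_SECUREFILE is permitted in your release) and then inspected:

```sql
-- Temporarily prevent SecureFiles creation in this session only.
ALTER SESSION SET db_securefile = 'NEVER';

-- Inspect the current setting.
SHOW PARAMETER db_securefile
```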
Creating SecureFiles LOB
CREATE TABLE func_spec (id number, doc CLOB ENCRYPT USING 'AES128') LOB(doc) STORE AS SECUREFILE (DEDUPLICATE LOB CACHE NOLOGGING);
CREATE TABLE test_spec (id number, doc CLOB) LOB(doc) STORE AS SECUREFILE (COMPRESS HIGH KEEP_DUPLICATES CACHE NOLOGGING);
CREATE TABLE design_spec (id number, doc CLOB) LOB(doc) STORE AS SECUREFILE (ENCRYPT);
CREATE TABLE design_spec (id number, doc CLOB ENCRYPT) LOB(doc) STORE AS SECUREFILE;
Creating SecureFiles LOB You create a SecureFile LOB when the storage keyword SECUREFILE appears in the CREATE TABLE statement with a LOB column. If the keyword SECUREFILE is not used, or if the keyword BASICFILE is used, then a basic LOB (as in prior releases) is created. BASICFILE is the default storage. The illustration above shows examples of creating SecureFiles. In the first example, you are creating a table called FUNC_SPEC to store documents as SecureFiles. Here you specify that you do not want duplicates stored for the LOB, that the LOB should be cached when read, and that redo should not be generated when updates are performed to the LOB. In addition, you specify that the documents stored in the doc column should be encrypted using the AES128 encryption algorithm. KEEP_DUPLICATES is the opposite of DEDUPLICATE and can be used in an ALTER statement. In the second example, you are creating a table called TEST_SPEC, which stores documents as SecureFiles. For this table you have specified that duplicates may be stored, and that the LOBs should be stored in compressed format and should be cached but not logged. The default compression level is MEDIUM. The compression algorithm is implemented on the server side, which allows for random reads and writes to LOB data. That property can also be changed via ALTER statements.
Oracle Database 11g: New Features for Administrators 15 - 8
Creating SecureFiles LOB (Continued) The third and fourth examples on the slide are semantically identical; the difference is mainly syntactic. The first version of the statement uses the new ENCRYPT option within the SECUREFILE clause. The second version uses the ENCRYPT keyword directly after the column type. Both versions use Transparent Data Encryption (TDE) to encrypt the corresponding column. Note: For a full description of the options available for the CREATE TABLE statement, see the Oracle Database SQL Reference.
Oracle Database 11g: New Features for Administrators 15 - 9
SecureFiles Key Parameters
• CHUNKSIZE: Deprecated
• PCTVERSION: Does not apply to SecureFiles
• MAXSIZE: Specify maximum segment size
• RETENTION: Specify retention policy to use:
  – MAX: Keep old versions until MAXSIZE is reached
  – MIN: Keep old versions at least MIN seconds
  – AUTO: Default
  – NONE: Reuse old versions as much as possible
SecureFiles Key Parameters CHUNKSIZE is deprecated. Although you can still use it, it is not necessary because it is not used internally. PCTVERSION no longer applies to SecureFiles. MAXSIZE is a physical storage attribute for your SecureFiles. It specifies the maximum segment size related to the storage clause level. RETENTION is an overloaded keyword. Its meaning for SecureFiles is the following:
• MAX starts reclaiming old versions once the segment MAXSIZE is reached.
• MIN keeps old versions for at least the specified amount of time.
• AUTO is the default setting, which is basically a tradeoff between space and time that is determined automatically.
• NONE reuses old versions as much as possible.
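These parameters are supplied in the SECUREFILE storage clause. The following is a minimal sketch, not taken from the course material: the table name is hypothetical, and the exact placement of MAXSIZE within the storage clause should be verified against the SQL Language Reference for your release:

CREATE TABLE doc_tab (id NUMBER, doc CLOB)
  LOB(doc) STORE AS SECUREFILE
  (RETENTION MIN 3600 STORAGE (MAXSIZE 500M));

Here old LOB versions are kept for at least 3600 seconds, and the LOB segment may not grow beyond 500 MB.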
Oracle Database 11g: New Features for Administrators 15 - 10
Altering SecureFiles

Allow duplicates again:
ALTER TABLE t1 MODIFY LOB(a) (KEEP_DUPLICATES);

Enable compression on SecureFiles:
ALTER TABLE t1 MODIFY LOB(a) (COMPRESS HIGH);

Enable compression on SecureFiles within a single partition:
ALTER TABLE t1 MODIFY PARTITION p1 LOB(a) (COMPRESS HIGH);

Enable encryption using 3DES168:
ALTER TABLE t1 MODIFY (a CLOB ENCRYPT USING '3DES168');
Altering SecureFiles DEDUPLICATE/KEEP_DUPLICATES: The DEDUPLICATE option allows you to specify that LOB data that is identical in two or more rows in a LOB column should share the same data blocks. The opposite of this is KEEP_DUPLICATES. Oracle uses a secure hash index to detect duplication and combines LOBs with identical content into a single copy, reducing storage and simplifying storage management. VALIDATE: Performs a byte-by-byte comparison of the SecureFile LOB with the SecureFile LOB that has the same secure hash value, to verify that the LOBs match before finalizing deduplication. The LOB keyword is optional and is for syntactic clarity only. COMPRESS/NOCOMPRESS: Enables or disables LOB compression. All LOBs in the LOB segment are altered with the new setting. ENCRYPT/DECRYPT: Turns LOB encryption using TDE on or off. All LOBs in the LOB segment are altered with the new setting. A LOB segment can be altered only to enable or disable LOB encryption. That is, ALTER cannot be used to update the encryption algorithm or the encryption key; these can be updated using the ALTER TABLE REKEY syntax. Encryption is done at the block level as the last step, which allows for better performance (the smallest possible amount of data is encrypted) when combined with other options. RETENTION: Altering RETENTION only affects space created after the ALTER TABLE statement is executed.
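The re-key operation mentioned above can be sketched as follows (this assumes TDE is already configured with an open wallet; the table name and algorithm are illustrative):

ALTER TABLE t1 REKEY USING 'AES256';

This regenerates the column encryption key for the table, re-encrypting the encrypted columns, including encrypted SecureFile LOB columns, with the named algorithm.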
Oracle Database 11g: New Features for Administrators 15 - 11
Storing SecureFiles The centerpiece of the disk structures for new LOBs is the inode, so named for its semantic similarity to inodes in traditional file systems. A LOB inode is a self-contained description of the user data in a LOB. Self-contained implies that, except for dictionary-level information, the inode does not refer to data in other segments in order to describe and access a LOB. Additionally, the LOB inode is transportable. When moving new LOB tablespaces across platforms, the information in a LOB inode is either converted per the dictates of the transportable tablespace infrastructure, or is already stored in a platform-independent format and requires no conversion. Both the LOB inode and LOB user data are stored in block format in a LOB segment. A LOB segment is a new segment type composed of a new type of transaction-managed blocks. New LOB segments are like automatic segment space management (ASSM) segments, but have additional space-management structures that are designed to make the allocation and de-allocation of large chunks of contiguous disk space extremely fast. The LOB inode is a highly structured, variable-sized entity that describes coarse-grained user-level properties of a LOB such as:
• Byte and character length
• User-data checksum or hash
• Presence or absence of compression and encryption, and the specific algorithms used, if any
• Presence or absence of user-controlled versions of the LOB
The LOB inode also describes fine-grained storage for the user-data blocks of a LOB, such as the LOB map. Oracle Database 11g: New Features for Administrators 15 - 12
Storing SecureFiles (Continued) The goal of the LOB map is to allow the inode layer to efficiently access any random byte offset within the user data, in addition to the more obvious requirement of mapping and accessing all bytes in the LOB. The LOB map is a hybrid of different types of persistent data structures, each optimized for a different range of logical offsets in the LOB. It is important to note that the LOB map is not a user-visible structure. User data itself is stored in transaction-managed blocks in the LOB segment, and contiguous ranges of such blocks are grouped into physical and logical units called chunks. As with the LOB map, chunks are visible only to the Inode and Space Management layers: within the Inode layer, chunks are the entities that the LOB map describes; within the Space Management layer, chunks are the ranges of disk blocks allocated and de-allocated as a unit. As far as the user of the LOB is concerned, there are no preferred sizes, granularities, or alignments for any data access: the Inode layer internally chooses the most appropriate chunking granularities for the user data based on available resources. Note: These chunks are distinct, in almost every respect, from the chunks that comprise LOBs available in prior releases. The Inode layer provides the following functionality:
• Supports all existing functionality and access APIs for both old and new LOBs
• Provides improved performance for SecureFile LOBs
• Implements new SecureFile functionality:
  - Compression
  - Encryption
  - Varying-width encoding
  - Hashing
  - Versioning
  - Sharing
Note: There are no longer LOB indexes created for SecureFiles.
Oracle Database 11g: New Features for Administrators 15 - 13
Accessing SecureFiles Metadata Data layer interface is the exact same as with BASICFILES!
Accessing SecureFiles Metadata To access SecureFile data itself, you use exactly the same interface as with BasicFiles. DBMS_LOB Package: LOBs inherit the LOB column settings for deduplication, encryption, and compression, which can also be configured on a per-LOB level using the LOB locator API. However, the LONG API cannot be used to configure these LOB settings. You must use the following DBMS_LOB package additions for these features: • DBMS_LOB.GETOPTIONS: Settings can be obtained using this function. An integer corresponding to a predefined constant based on the option type is returned. • DBMS_LOB.SETOPTIONS: This procedure sets features and allows the features to be set on a per-LOB basis, overriding the default LOB settings. It incurs a round trip to the server to make the changes persistent. • DBMS_LOB.GET_DEDUPLICATE_REGIONS: This procedure outputs a collection of records identifying the deduplicated regions in a LOB. LOB-level deduplication contains only a single deduplicated region. DBMS_SPACE.SPACE_USAGE: The existing SPACE_USAGE procedure is overloaded to return information about LOB space usage. It returns the amount of disk space in blocks used by all the LOBs in the LOB segment. This procedure can only be used on tablespaces that are created with ASSM, and it does not treat LOB chunks belonging to BasicFiles as used space. Note: For further details, see the Oracle Database PL/SQL Packages and Types Reference.
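A call to DBMS_LOB.GETOPTIONS might be sketched as follows, reusing the FUNC_SPEC table from the earlier example. The constant name DBMS_LOB.OPT_COMPRESS is an assumption about the package and should be verified against the PL/SQL Packages and Types Reference:

DECLARE
  l_doc CLOB;
  l_opt INTEGER;
BEGIN
  SELECT doc INTO l_doc FROM func_spec WHERE id = 1;
  -- Query the compression setting of this particular LOB
  l_opt := DBMS_LOB.GETOPTIONS(l_doc, DBMS_LOB.OPT_COMPRESS);
  DBMS_OUTPUT.PUT_LINE('Compression option: ' || l_opt);
END;
/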
Oracle Database 11g: New Features for Administrators 15 - 14
Record-oriented SecureFile LOB for XML Index Improvement
Record-oriented SecureFile LOB for XML Index Improvement Oracle Database 11g supports partial update in the form of delta update. The DBMS_LOB package, OCI, and other APIs are extended to support new piecewise update calls. LOB reorganization occurs automatically during an update call when the server determines that a reorganization operation is more beneficial than a delta update operation. This operation is fully transparent to the client; only the piecewise update call would appear slow to the client in this case. LOB updates support CLOB and NCLOB. The API for CLOB and NCLOB is the same as for BLOB; only the offset fields are interpreted as character offsets.
Oracle Database 11g: New Features for Administrators 15 - 15
Migrating to SecureFiles There are two recommended methods for migrating BasicFiles to SecureFiles: partition exchange and online redefinition. Partition Exchange: • Needs additional space equal to the largest of the partitions in the table. • Can maintain indexes during the exchange. • Can spread the workload out over several smaller maintenance windows. • Requires the table or partition to be offline to perform the exchange. Online Redefinition (recommended best practice): • No need to take the table or partition offline. • Can be done in parallel. • Requires additional storage equal to the entire table and all LOB segments to be available. • Any global indexes must be rebuilt. If you want to upgrade your BasicFiles to SecureFiles, you need to upgrade by the normal methods typically used to upgrade data (for example, CTAS/ITAS, online redefinition, export/import, column-to-column copy, or using a view and a new column). Most of these solutions mean using two times the disk space used by the data in the input LOB column. However, partitioning and performing these actions on a partition-by-partition basis may help lower the disk space required.
Oracle Database 11g: New Features for Administrators 15 - 16
SecureFile Migration Example

create table tab1 (id number not null, c clob)
partition by range(id)
(partition p1 values less than (100) tablespace tbs1 lob(c) store as lobp1,
 partition p2 values less than (200) tablespace tbs2 lob(c) store as lobp2,
 partition p3 values less than (300) tablespace tbs3 lob(c) store as lobp3);

Insert your data, then create the interim table, this time using SecureFiles:

create table tab1_tmp (id number not null, c clob)
partition by range(id)
(partition p1 values less than (100) tablespace tbs1 lob(c) store as securefile lobp1,
 partition p2 values less than (200) tablespace tbs2 lob(c) store as securefile lobp2,
 partition p3 values less than (300) tablespace tbs3 lob(c) store as securefile lobp3);

declare
  error_count pls_integer := 0;
begin
  dbms_redefinition.start_redef_table('scott','tab1','tab1_tmp','id id, c c');
  dbms_redefinition.copy_table_dependents('scott','tab1','tab1_tmp',1,
    true,true,true,false,error_count);
  dbms_redefinition.finish_redef_table('scott','tab1','tab1_tmp');
end;
/
SecureFile Migration Example The example above can be used to migrate BasicFile LOBs to SecureFile LOBs. First, you create your table using BasicFiles; the example uses a partitioned table. Then, you insert data into your table. Following that, you create an interim table that has the same number of partitions, but this time using SecureFiles. Note that this interim table has the same columns and types. The last step is to redefine your table using the previously created interim table.
Oracle Database 11g: New Features for Administrators 15 - 17
SecureFile Monitoring
All the same mechanisms: • *_LOBS / *_LOB_PARTITIONS / *_PART_LOBS – New SECUREFILE column
• SYS_USER_SEGS / SYS_DBA_SEGS – New SECUREFILE segment subtype – New RETENTION column – New MINRETENTION column for RETENTION MIN
Oracle Database 11g: New Features for Administrators 16 - 1
Objectives
After completing this lesson, you should be able to: • Describe and use the enhanced online table redefinition and materialized views • Describe finer grained dependency management • Describe and use the enhanced PL/SQL recompilation mechanism • Use enhanced DDL – Apply the improved table lock mechanism – Create and use invisible indexes
Oracle Database 11g: New Features for Administrators 16 - 2
Objectives
After completing this lesson, you should be able to: • Use PL/SQL result cache • Create Bitmap join indexes for IOT • Describe System Managed Domain Indexes • Use automatic Native PL/SQL and Java Compilation • Use Client query Cache
Online Table Redefinition Enhancements When a table is redefined online, it is accessible to both queries and DML during much of the redefinition process. The process is enhanced in Oracle Database 11g to support tables with materialized views and view logs. In addition, online redefinition supports triggers with the FOLLOWS or PRECEDES clause, which establishes an ordering dependency between the triggers. Also, PL/SQL and dependent objects are not invalidated after a redefinition, unless they are logically affected. You can redefine a table online with the Enterprise Manager Reorganize Objects wizard or with the DBMS_REDEFINITION package. Note: You can access the Reorganize Objects wizard from the Schema sub-page.
Oracle Database 11g: New Features for Administrators 16 - 4
Online Redefinition Wizard In prior database versions, a table cannot be redefined if it has a materialized view (MV) log or materialized views. In Oracle Database 11g, you can redefine tables with materialized views and MV logs. You can clone the materialized view log onto the interim table, just like triggers, indexes, and other similar dependent objects. At the end of the redefinition, rowid logs are invalidated. Initially, all dependent materialized views need to do a complete refresh. This enhancement saves you the effort and time of dropping and recreating the materialized views and the materialized view logs. Note that for materialized view logs and queue tables, online redefinition is restricted to changes in physical properties. No horizontal or vertical sub-setting is permitted, nor are any column transformations. (The only valid value for the column mapping string is NULL.)
Oracle Database 11g: New Features for Administrators 16 - 5
Redefinition and Materialized View The example shows redefinition of the HR.LOCATION_MV materialized view and the HR.MLOG$_LOCATIONS view log, based on the HR.LOCATIONS table. 1. Invoke the Reorganize Objects wizard. 2. Select all database objects related to HR.LOCATIONS. 3. This example uses default options. 4. The Reorganize Objects wizard analyzes the space needed and displays an Impact Report.
Oracle Database 11g: New Features for Administrators 16 - 6
Continuing with the Example: 5. Schedule the reorganization for immediate execution. 6. Review the Script Summary and Full Script. (You may wish to save the Full script).
Oracle Database 11g: New Features for Administrators 16 - 7
Continuing with the Example: 7. Submit the job. 8. Verify its successful execution. Best practice tip: You should start the redefinition process prior to the start of the downtime, and the downtime should be used to complete the redefinition.
Oracle Database 11g: New Features for Administrators 16 - 8
Steps in Redefining a Table using PL/SQL
1. Choose the redefinition method.
2. Use the DBMS_REDEFINITION.CAN_REDEF_TABLE procedure to verify that the table can be redefined.
3. Create an empty interim table without indexes.
4. Use the DBMS_REDEFINITION.START_REDEF_TABLE procedure to start redefinition.
5. Create indexes on the interim table.
6. Use DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS to copy dependent objects into the interim table.
7. Check for errors in the DBA_REDEFINITION_ERRORS view.
8. Use the DBMS_REDEFINITION.FINISH_REDEF_TABLE procedure to complete the redefinition.
9. Drop the interim table.
Steps in Redefining a Table using PL/SQL 1. Choose the redefinition method: by key (primary key or pseudo-primary key) or by rowid (if no key is available). 2. Verify that the table is a candidate for online redefinition with the CAN_REDEF_TABLE procedure. 3. Create an empty interim table (in the same schema as the table to be redefined) with the desired logical and physical attributes, but without indexes. Optionally, and as a best practice: If you are redefining a large table and want to improve the performance of the next step by running it in parallel, issue the following statements: ALTER SESSION FORCE PARALLEL DML PARALLEL <degree>; ALTER SESSION FORCE PARALLEL QUERY PARALLEL <degree>; 4. Start the redefinition process by calling the START_REDEF_TABLE procedure. If you did not define indexes in step 3, the initial copy uses direct path inserts and does not have to maintain indexes at this point, which is a performance benefit. 5. Create any indexes and other dependent objects on the interim table. 6. Copy dependent objects of the original table onto the interim table with the COPY_TABLE_DEPENDENTS procedure. This procedure clones and registers dependent objects of the base table, such as triggers, indexes, materialized view logs, grants, and constraints. This procedure does not clone already registered dependent objects.
Oracle Database 11g: New Features for Administrators 16 - 9
Steps in Redefining a Table using PL/SQL (Continued) 7. Query the DBA_REDEFINITION_ERRORS view to check for errors. Optionally and best practice: Synchronize the interim and the original tables periodically with the SYNC_INTERIM_TABLE procedure. Perform a final synchronization before completing the redefinition. 8. Complete the redefinition with the FINISH_REDEF_TABLE procedure. 9. Drop the interim table. The following are the end results of the redefinition process: • The original table is redefined with the columns, indexes, constraints, grants, triggers, and statistics of the interim table. • Dependent objects that were registered, either explicitly through the REGISTER_DEPENDENT_OBJECT procedure or implicitly through the COPY_TABLE_DEPENDENTS procedure, are renamed automatically, so that dependent object names on the redefined table are the same as before redefinition. If no registration is done or no automatic copying is done, then you must manually rename the dependent objects. • The referential constraints involving the interim table now involve the redefined table and are enabled. • Any indexes, triggers, materialized view logs, grants, and constraints defined on the original table (prior to redefinition) are transferred to the interim table and are dropped when the user drops the interim table. Any referential constraints involving the original table before the redefinition now involve the interim table and are disabled. • PL/SQL procedures and dependent objects are invalidated, if they are logically affected by the redefinition. They are automatically revalidated whenever they are used next. Note: The revalidation can fail if the logical structure of the table was changed as a result of the redefinition process.
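Step 2 of the procedure above can be sketched as follows (the schema and table names reuse the earlier migration example; CONS_USE_PK selects redefinition by primary key):

BEGIN
  DBMS_REDEFINITION.CAN_REDEF_TABLE(
    uname        => 'SCOTT',
    tname        => 'TAB1',
    options_flag => DBMS_REDEFINITION.CONS_USE_PK);
END;
/

If the table cannot be redefined online, the procedure raises an error explaining the reason; otherwise it completes silently and you can proceed to create the interim table.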
Oracle Database 11g: New Features for Administrators 16 - 10
More Precise Dependency Metadata
Recording of additional, finer-grained dependency management example: • Prior to Oracle Database 11g, adding column D to table T invalidated the dependent objects. • Starting in Oracle Database 11g, adding column D to table T does not impact view V and does not invalidate the dependent objects.
Starting with Oracle Database 11g, you have access to records that describe more precise dependency metadata. This is called fine-grained dependency tracking, and it ensures that dependent objects are not invalidated without a logical requirement. Earlier Oracle Database releases record dependency metadata (for example, that PL/SQL unit P depends on PL/SQL unit F, or that view V depends on table T) with the precision of the whole object. This means that dependent objects are sometimes invalidated without logical requirement. For example, if view V depends only on columns A and B in table T, and column D is added to table T, the validity of view V is not logically affected. Nevertheless, before Oracle Database Release 11.1, view V is invalidated by the addition of column D to table T. With Oracle Database Release 11.1, adding column D to table T does not invalidate view V. Similarly, if procedure P depends only on elements E1 and E2 within a package, adding element E99 to the package does not invalidate procedure P. Reducing the invalidation of dependent objects in response to changes to the objects on which they depend increases application availability, both in the development environment and during online application upgrade.
Oracle Database 11g: New Features for Administrators 16 - 11
Fine-Grain Dependency Management
Adding a column to a table no longer impacts dependent views and does not invalidate the dependent objects. • Dependencies are tracked automatically • Requires no configuration
CREATE VIEW NEW_EMPLOYEES AS SELECT LAST_NAME FROM EMPLOYEES WHERE EMPLOYEE_ID > 20;
16 - 12
Fine-Grain Dependency Management In Oracle Database 11g, you now have access to records that describe more precise dependency metadata. This is called fine-grained dependency tracking, and it ensures that dependent objects are not invalidated without a logical requirement. In Oracle Database 11g, dependencies are tracked at the element level within a unit. Element-based dependency tracking covers the following: • Dependency of a single-table view on its base table • Dependency of a PL/SQL program unit (package specification, package body, or subprogram) on the following: - Other PL/SQL program units - Tables - Views A cross-unit reference creates a dependency from the unit making the reference (the dependent unit, for example, the NEW_EMPLOYEES view above) to the unit being referenced (the parent unit, for example, the EMPLOYEES table). Dependencies are always tracked automatically by the PL/SQL and SQL compilers. This mechanism is available out of the box and does not require any configuration. Reducing the invalidation of dependent objects in response to changes to the objects on which they depend increases application availability.
Oracle Database 11g: New Features for Administrators 16 - 12
Fine-Grain Dependency Benefit Example

1. CREATE TABLE t (col_a NUMBER, col_b NUMBER, col_c NUMBER);
   CREATE VIEW v AS SELECT col_a, col_b FROM t;

   SELECT ud.name, ud.type, ud.referenced_name, ud.referenced_type, uo.status
   FROM user_dependencies ud, user_objects uo
   WHERE ud.name = uo.object_name AND ud.name = 'V';

   NAME   TYPE   REFERENCED_NAME   REFERENCED_TYPE   STATUS
   -----  -----  ----------------  ----------------  ------
   V      VIEW   T                 TABLE             VALID

2. ALTER TABLE t ADD (col_d VARCHAR2(20));

   SELECT ud.name, ud.type, ud.referenced_name, ud.referenced_type, uo.status
   FROM user_dependencies ud, user_objects uo
   WHERE ud.name = uo.object_name AND ud.name = 'V';

   NAME   TYPE   REFERENCED_NAME   REFERENCED_TYPE   STATUS
   -----  -----  ----------------  ----------------  ------
   V      VIEW   T                 TABLE             VALID
Fine-Grain Dependency Benefit Example In the first example above, table T is created with three columns: COL_A, COL_B, and COL_C. A view named V is created based on columns COL_A and COL_B of table T. When the dictionary views are queried, view V is shown as dependent on table T, and its status is VALID. In the second example above, table T is altered: a new column named COL_D is added. The dictionary views still report view V as VALID, because element-based dependency tracking recognizes that columns COL_A and COL_B are not modified and, therefore, the view does not need to be invalidated.
Oracle Database 11g: New Features for Administrators 16 - 13
Fine-Grain Dependency Benefit Example

CREATE PACKAGE pkg IS
  PROCEDURE p1;
END pkg;
/
CREATE PROCEDURE p IS
BEGIN
  pkg.p1();
END;
/
CREATE OR REPLACE PACKAGE pkg IS
  PROCEDURE p1;
  PROCEDURE unheard_of;
END pkg;
/
SELECT status FROM user_objects WHERE object_name = 'P';

STATUS
-------
VALID
Fine-Grain Dependency Benefit Example In the example shown above, you create a package named PKG that declares a procedure P1. Another procedure, named P, invokes PKG.P1. The definition of package PKG is then modified: another subprogram is added to the package declaration. When you query the USER_OBJECTS dictionary view for the status of procedure P, it is still VALID, because the element you added to the definition of PKG is not referenced by procedure P.
Oracle Database 11g: New Features for Administrators 16 - 14
Usage Guidelines

Original:
CREATE OR REPLACE PACKAGE PACK1 IS
  FUNCTION FUN1 RETURN VARCHAR2;
  FUNCTION FUN2 RETURN VARCHAR2;
  PROCEDURE PR1 (V1 VARCHAR2);
END;

No invalidation (item added at the end):
CREATE OR REPLACE PACKAGE PACK1 IS
  FUNCTION FUN1 RETURN VARCHAR2;
  FUNCTION FUN2 RETURN VARCHAR2;
  PROCEDURE PR1 (V1 VARCHAR2);
  PROCEDURE PR2 (V1 VARCHAR2);
END;

Partial invalidation (item added in the middle):
CREATE OR REPLACE PACKAGE PACK1 IS
  FUNCTION FUN1 RETURN VARCHAR2;
  FUNCTION FUN2 RETURN VARCHAR2;
  FUNCTION FUN3 RETURN VARCHAR2;
  PROCEDURE PR1 (V1 VARCHAR2);
  PROCEDURE PR2 (V1 VARCHAR2);
END;
Usage Guidelines to Reduce Invalidation 1. Add items to the end of a package to avoid changing the slot numbers or entry-point numbers of existing top-level elements. 2. Avoid SELECT *, table%ROWTYPE, and INSERT statements with no column names in PL/SQL units, to allow the ADD COLUMN functionality without invalidation. 3. Use views or synonyms to provide a layer of indirection between PL/SQL code and tables. The CREATE OR REPLACE VIEW command does not invalidate views and PL/SQL dependents if the view's new rowtype matches the old rowtype (this behavior is available starting in Oracle Database 10g Release 2). 4. Likewise, the CREATE OR REPLACE SYNONYM command does not invalidate PL/SQL dependents if the old table and the new table have the same rowtype and privilege grants. Views and synonyms allow you to evolve tables independently of the code in your application.
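Guideline 3 can be sketched as follows (the view name EMP_V and the replacement table EMPLOYEES_V2 are hypothetical; the point is that the replaced view keeps the same rowtype):

-- PL/SQL units reference the view, not the table:
CREATE OR REPLACE VIEW emp_v AS
  SELECT employee_id, last_name FROM employees;

-- Later, the underlying table can be swapped; because the new
-- view has the same rowtype, PL/SQL dependents of EMP_V stay valid:
CREATE OR REPLACE VIEW emp_v AS
  SELECT employee_id, last_name FROM employees_v2;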
Oracle Database 11g: New Features for Administrators 16 - 15
Minimizing Dependent PL/SQL Recompilation
• After DDL commands • After online table redefinition • Transparent enhancement
In prior database versions, all directly and indirectly dependent views and PL/SQL packages are invalidated after an online redefinition or other DDL operations. These views and PL/SQL packages are automatically recompiled whenever they are next invoked. If there are many dependent PL/SQL packages and views, the cost of the revalidation or recompilation can be significant. In Oracle Database 11g, views, synonyms, and other table-dependent objects (with the exception of triggers) that are not logically affected by the redefinition are not invalidated. So, for example, if referenced column names and types are the same after the redefinition, these objects are not invalidated. This optimization is transparent; that is, it is turned on by default. Another example: if the redefinition drops a column, only those procedures and views that reference the column are invalidated. The other dependent procedures and views remain valid. Note that all triggers on a table being redefined are invalidated (because the redefinition can potentially change the internal column numbers and data types), but they are automatically revalidated with the next DML execution against the table.
Oracle Database 11g: New Features for Administrators 16 - 16
Serializing Locks
• Oracle Database 11g allows DDL commands to wait for DML locks
• DDL_LOCK_TIMEOUT parameter set at system and session level
• Values: 0 – 1000000 (in seconds)
  – 0: NOWAIT
  – 1000000: Very long WAIT
Serializing Locks You can limit the time that DDL commands wait for DML locks by setting the DDL_LOCK_TIMEOUT parameter at the system or session level. This initialization parameter is set by default to 0, that is, NOWAIT, which ensures backward compatibility. The range of values is 0 to 1000000 (in seconds). The maximum value of 1000000 seconds allows the DDL statement to wait for a very long time (11.5 days) for the DML lock. If the lock is not acquired before the timeout expires, your application should handle the timeout accordingly.
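For example, to make DDL statements in the current session wait up to 30 seconds for DML locks before failing:

ALTER SESSION SET DDL_LOCK_TIMEOUT = 30;

The same parameter can be set with ALTER SYSTEM to change the default for all sessions.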
Oracle Database 11g: New Features for Administrators 16 - 17
Locking Tables Explicitly Useful for adding a column (without a default value) to a table that is frequently updated • Wait for up to 10 seconds for a DML lock: LOCK TABLE hr.jobs IN EXCLUSIVE MODE WAIT 10;
• Do not wait if another user already has locked the table: LOCK TABLE hr.employees IN EXCLUSIVE MODE NOWAIT;
• Lock a table that is accessible through the remote_db database link: LOCK TABLE hr.employees@remote_db IN SHARE MODE;
Locking Tables Explicitly DDL commands require exclusive locks on internal structures. If these locks are unavailable when a DDL command is issued, the DDL command fails, though it might have succeeded if it had been issued sub-seconds later. The WAIT option allows a DDL command to wait for its locks for a specified period of time before failing. The LOCK TABLE command has new syntax that lets you specify the maximum number of seconds the statement should wait to obtain a DML lock on the table. LOCK TABLE … IN lockmode MODE [NOWAIT | WAIT integer]
Specify NOWAIT if you want the database to return control to you immediately. If the specified table, partition, or table subpartition is already locked by another user, the database returns a message. Use the WAIT clause to indicate that the LOCK TABLE statement should wait up to the specified number of seconds to acquire a DML lock. There is no limit on the value of the integer. If you specify neither NOWAIT nor WAIT, the database waits indefinitely until the table is available, locks it, and returns control to you. When the database is executing DDL statements concurrently with DML statements, a timeout or deadlock can sometimes occur. The database detects such timeouts and deadlocks and returns an error.
Oracle Database 11g: New Features for Administrators 16 - 18
Sharing Locks
The following commands no longer acquire exclusive locks (X), but shared exclusive locks (SX). The benefit is that DML can continue while the DDL is executed. This change is transparent; that is, there is no syntax change. – CREATE INDEX ONLINE – CREATE MATERIALIZED VIEW LOG – ALTER TABLE ENABLE CONSTRAINT NOVALIDATE
Sharing Locks In highly concurrent environments, the requirement to acquire an exclusive lock (for example, at the end of an online index creation or rebuild) could lead to a spike of waiting DML operations and, therefore, a short drop and spike in system usage. While this is not an overall problem for the database, this anomaly in system usage could trigger operating system alarm levels. This feature eliminates the need for exclusive locks when creating or rebuilding an index online.
Oracle Database 11g: New Features for Administrators 16 - 19
Invisible Indexes
• Index is altered as not visible to the optimizer: ALTER INDEX ind1 INVISIBLE;
• Optimizer considers this index for this statement: SELECT /*+ index(TAB1 IND1) */ COL1 FROM TAB1 WHERE …;
• Optimizer will always consider the index: ALTER INDEX ind1 VISIBLE;
• Creating an index as invisible initially: CREATE INDEX IND1 ON TAB1(COL1) INVISIBLE;
Invisible Indexes
Oracle Database 11g allows you to create and alter indexes as invisible. An invisible index is maintained by DML operations, but it is not used by the optimizer during queries unless the query includes a hint that names the index. Using invisible indexes, you can:
• Test the removal of an index before dropping it
• Use temporary index structures for certain operations or modules of an application without affecting the overall application, for example, during an application upgrade process
When an index is invisible, the optimizer generates plans that do not use the index. If there is no discernible drop in performance, you can then drop the index. If some queries benefit from the index, you can make the index visible again, thus avoiding the effort of dropping and then having to re-create it. You can also create an index initially as invisible, perform testing, and then determine whether to make the index available. You can query the VISIBILITY column of the *_INDEXES data dictionary views:

SELECT INDEX_NAME, VISIBILITY
FROM   USER_INDEXES
WHERE  INDEX_NAME = 'IND1';

INDEX_NAME  VISIBILITY
----------  ----------
IND1        VISIBLE
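The test-before-drop workflow described above can be sketched as follows, reusing the hypothetical IND1/TAB1 names from the slide:

```sql
-- Hide the index from the optimizer; DML continues to maintain it.
ALTER INDEX ind1 INVISIBLE;

-- Run the workload and compare performance. An individual query can
-- still force the invisible index with a hint while testing:
SELECT /*+ INDEX(tab1 ind1) */ col1 FROM tab1 WHERE col1 = 42;

-- If performance regressed, restore the index without a rebuild:
ALTER INDEX ind1 VISIBLE;

-- If nothing regressed, the index can safely be dropped:
DROP INDEX ind1;
```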
Oracle Database 11g: New Features for Administrators 16 - 20
Query Result Cache
• Cache the result of a query or query block for future reuse
• Cache is used across statements and sessions unless it is stale
• Benefits:
  – Scalability
  – Reduction of memory usage
Query Result Cache
The Query Result Cache enables explicit caching of query result sets and query fragments in database memory. The cached result set data is transparently kept consistent with any changes made on the server side. Applications see improved performance for queries that have a cache hit, and avoid round trips to the server for sending the query and fetching the results. A separate shared memory pool is now used for storing and retrieving the cached results. Query retrieval from the query result cache is faster than re-running the query. Frequently executed queries see significant performance improvements when using the query result cache. The query results stored in the cache become invalid when data in the database objects accessed by the query is modified.
Note: Each node in a RAC configuration has a private result cache. The decision to use the result cache feature is a cluster-wide decision. For more information on using result caches in a RAC configuration, please see the Oracle Database 11g Real Application Clusters documentation.
Oracle Database 11g: New Features for Administrators 16 - 21
Setting up Query Result Cache
• Set at database level using the RESULT_CACHE_MODE initialization parameter. Values are:
  – AUTO: The optimizer determines which results are to be stored in the cache, based on repetitive executions
  – MANUAL: Use the RESULT_CACHE hint to specify results to be stored in the cache
  – FORCE: All results are stored in the cache
• Set at table level:
  ALTER TABLE employees RESULT_CACHE (MODE AUTO);
Setting up Query Result Cache
The query optimizer manages the result cache mechanism depending on the setting of the RESULT_CACHE_MODE parameter in the initialization parameter file. You can use this parameter to determine whether the optimizer automatically sends the results of queries to the result cache. You can set the RESULT_CACHE_MODE parameter at the system, session, and table level. The possible parameter values are AUTO, MANUAL, and FORCE. When set to AUTO, the optimizer determines which results are to be stored in the cache, based on repetitive executions. When set to MANUAL (the default), you must specify, by using the RESULT_CACHE hint, that a particular result is to be stored in the cache. When set to FORCE, all results are stored in the cache. The Query Result Cache can also be set at the table level using CREATE or ALTER statements. The syntax follows:

CREATE/ALTER TABLE [<schema>.]… [RESULT_CACHE {(MODE AUTO|MANUAL|FORCE)}]
Setting the result cache mode at the table level ensures that whenever a query retrieves data from this table, the result is automatically stored in the result cache.
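Putting the scopes together, a minimal sketch (object names are illustrative):

```sql
-- Database level: cache all eligible results
ALTER SYSTEM SET RESULT_CACHE_MODE = FORCE;

-- Session level: fall back to hint-driven caching for this session only
ALTER SESSION SET RESULT_CACHE_MODE = MANUAL;

-- Table level: queries against this table store results automatically
ALTER TABLE employees RESULT_CACHE (MODE FORCE);
```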
Oracle Database 11g: New Features for Administrators 16 - 22
Using the RESULT_CACHE Hint

SELECT /*+ RESULT_CACHE */ department_id, AVG(salary)
FROM   employees
GROUP  BY department_id;

(The execution plan for this query includes a RESULT CACHE operation.)

SELECT /*+ NO_RESULT_CACHE */ department_id, AVG(salary)
FROM   employees
GROUP  BY department_id;
Using the RESULT_CACHE Hint
If you wish to use the query result cache and the RESULT_CACHE_MODE initialization parameter is set to MANUAL, you must explicitly specify the RESULT_CACHE hint in your query. This introduces the ResultCache operator into the execution plan for the query. When you execute the query, the ResultCache operator looks up the result cache memory to check whether the result for the query already exists in the cache. If it exists, the result is retrieved directly from the cache. If it does not yet exist in the cache, the query is executed, and the result is returned as output and is also stored in the result cache memory. If the RESULT_CACHE_MODE initialization parameter is set to AUTO or FORCE and you do not wish to store the result of a query in the result cache, you must use the NO_RESULT_CACHE hint in your query. For example, when the RESULT_CACHE_MODE value equals FORCE in the initialization parameter file and you do not wish to use the result cache for the EMPLOYEES table, use the NO_RESULT_CACHE hint.
Note: Use of the [NO_]RESULT_CACHE hint takes precedence over the parameter settings.
Oracle Database 11g: New Features for Administrators 16 - 23
Managing the Query Result Cache
The following initialization parameters can be used to manage the Query Result Cache:
• RESULT_CACHE_MAX_SIZE
  – Sets the memory allocated to the result cache
  – Result cache is disabled if you set the value to 0
• RESULT_CACHE_MAX_RESULT
  – Sets maximum cache memory for a single result
  – Defaults to 5%
• RESULT_CACHE_REMOTE_EXPIRATION
  – Sets the expiration time (in minutes) for cached results that depend on remote database objects
  – Defaults to 0
Managing the Query Result Cache
You can alter various parameter settings in the initialization parameter file to manage the query result cache of your database. By default, the database allocates memory for the result cache in the Shared Pool inside the SGA. The memory size allocated to the result cache depends on the memory size of the SGA as well as the memory management system. You can change the memory allocated to the result cache by setting the RESULT_CACHE_MAX_SIZE parameter. The result cache is disabled if you set the value to 0.
• Use the RESULT_CACHE_MAX_RESULT parameter to specify the maximum amount of cache memory that can be used by any single result. The default value is 5%, but you can specify any percent value between 1 and 100. This parameter can be set at the system and session level.
• Use the RESULT_CACHE_REMOTE_EXPIRATION parameter to specify the time (in number of minutes) for which a result that accesses remote database objects remains valid. The default value is 0.
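A sketch of the three settings discussed above (the values are illustrative, not recommendations):

```sql
-- Total memory for the server result cache (0 disables the cache)
ALTER SYSTEM SET RESULT_CACHE_MAX_SIZE = 2M;

-- No single result may use more than 10% of the cache memory
ALTER SYSTEM SET RESULT_CACHE_MAX_RESULT = 10;

-- Results referencing remote objects stay valid for 60 minutes
ALTER SYSTEM SET RESULT_CACHE_REMOTE_EXPIRATION = 60;
```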
Oracle Database 11g: New Features for Administrators 16 - 24
Using the DBMS_RESULT_CACHE Package
Use the DBMS_RESULT_CACHE package to:
• Manage memory allocation for the query result cache
• View the status of the cache
• Retrieve statistics on the cache memory usage:
  EXECUTE DBMS_RESULT_CACHE.MEMORY_REPORT
• Remove all existing results and clear cache memory:
  EXECUTE DBMS_RESULT_CACHE.FLUSH
Using the DBMS_RESULT_CACHE Package The DBMS_RESULT_CACHE package provides statistics, information, and operators that enable you to manage memory allocation for the query result cache. You can use the DBMS_RESULT_CACHE package to perform various operations such as viewing the status of the cache, retrieving statistics on the cache memory usage, and flushing the cache. For example, to view the memory allocation statistics, use the following SQL procedure: SQL> set serveroutput on SQL> execute dbms_result_cache.memory_report
The output of this command will be similar to the following:

R e s u l t   C a c h e   M e m o r y   R e p o r t
[Parameters]
Block Size          = 1024 bytes
Maximum Cache Size  = 720896 bytes (704 blocks)
Maximum Result Size = 35840 bytes (35 blocks)
[Memory]
Total Memory = 46284 bytes [0.036% of the Shared Pool]
... Fixed Memory = 10640 bytes [0.008% of the Shared Pool]
... State Object Pool = 2852 bytes [0.002% of the Shared Pool]
... Cache Memory = 32792 bytes (32 blocks) [0.025% of the Shared Pool]
....... Unused Memory = 30 blocks
....... Used Memory = 2 blocks
........... Dependencies = 1 blocks
........... Results = 1 blocks
............... SQL = 1 blocks
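Beyond MEMORY_REPORT, the other package operations mentioned above can be sketched as:

```sql
-- Check whether the cache is enabled (returns a status such as ENABLED)
SELECT DBMS_RESULT_CACHE.STATUS FROM DUAL;

-- Temporarily bypass the cache, for example during maintenance
EXECUTE DBMS_RESULT_CACHE.BYPASS(TRUE);

-- Remove all existing results and release the cache memory
EXECUTE DBMS_RESULT_CACHE.FLUSH;
```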
Oracle Database 11g: New Features for Administrators 16 - 25
Viewing Result Cache Dictionary Information
The following views provide information about the query result cache:

(G)V$RESULT_CACHE_STATISTICS: Lists the various cache settings and memory usage statistics.
(G)V$RESULT_CACHE_MEMORY: Lists all the memory blocks and the corresponding statistics.
(G)V$RESULT_CACHE_OBJECTS: Lists all the objects (cached results and dependencies) along with their attributes.
(G)V$RESULT_CACHE_DEPENDENCY: Lists the dependency details between the cached results and dependencies.
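For example, a quick health check against two of these views might look like this (a sketch; column lists abbreviated):

```sql
-- Overall cache settings and hit statistics
SELECT name, value
FROM   v$result_cache_statistics;

-- Cached results, their status, and the objects they depend on
SELECT type, status, name
FROM   v$result_cache_objects;
```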
Viewing Result Cache Dictionary Information Note: For further information please see the Oracle Database Reference Guide.
Oracle Database 11g: New Features for Administrators 16 - 26
OCI Client Query Cache
• Extends server-side query caching to client side memory • Ensures better performance by eliminating round trips to the server • Leverages client-side memory • Improves server scalability by saving server CPU resources • Result cache is automatically refreshed if the result set is changed on the server • Particularly good for lookup tables
OCI Client Query Cache
You can enable caching of query result sets in client memory with the OCI Client Query Cache in Oracle Database 11g. The cached result set data is transparently kept consistent with any changes made on the server side. Applications leveraging this feature see improved performance for queries that have a cache hit. Additionally, a query serviced by the cache avoids round trips to the server for sending the query and fetching the results. Server CPU that would have been consumed in processing the query is saved, thus improving server scalability. Before using the client-side query cache, determine whether your application will benefit from this feature. Client-side caching is useful when you have applications that produce repeatable result sets, small result sets, static result sets, or frequently executed queries.
Oracle Database 11g: New Features for Administrators 16 - 27
Using Client Side Query Cache You can use client-side query caching by: • Setting initialization parameters – CLIENT_RESULT_CACHE_SIZE – CLIENT_RESULT_CACHE_LAG
Using Client Side Query Cache
The following two parameters can be set in your initialization parameter file:
• CLIENT_RESULT_CACHE_SIZE: A nonzero value enables the client result cache. This is the maximum size of the client per-process result set cache in bytes. All OCI client processes get this maximum size; it can be overridden by the OCI_RESULT_CACHE_MAX_SIZE parameter.
• CLIENT_RESULT_CACHE_LAG: Maximum time (in milliseconds) since the last round trip to the server, before which the OCI client query execute makes a round trip to get any database changes related to the queries cached on the client.
A client configuration file is optional and overrides the cache parameters set in the server initialization parameter file. Parameter values can be part of a sqlnet.ora file. When the parameters shown above are specified, OCI client caching is enabled for OCI client processes using the configuration file:
• OCI_RESULT_CACHE_MAX_RSET_SIZE/ROWS: Maximum size of any result set in bytes/rows in the per-process query cache
OCI applications can utilize application hints to force result cache storage. This overrides the deployment-time settings of ALTER TABLE/ALTER VIEW. The application hints can be:
• SQL hints /*+ result_cache */ and /*+ no_result_cache */
• OCIStmtExecute() modes. These override both SQL hints and ALTER TABLE/ALTER VIEW annotations.
Note: To use this feature, your applications must be relinked with release 11.1 or higher client libraries and be connected to a release 11.1 or higher server.
Oracle Database 11g: New Features for Administrators 16 - 28
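As a sketch, a client-side sqlnet.ora enabling the cache might contain entries like the following (the values are illustrative assumptions, not recommendations):

```
OCI_RESULT_CACHE_MAX_SIZE = 64000        # bytes of client cache per process
OCI_RESULT_CACHE_MAX_RSET_SIZE = 8000    # max bytes for any one result set
OCI_RESULT_CACHE_MAX_RSET_ROWS = 500     # max rows for any one result set
```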
PL/SQL Function Cache • Stores function results in cache, making them available to other sessions. • Uses the Query Result Cache
PL/SQL Function Cache
Starting in Oracle Database 11g, you can use the PL/SQL cross-session function result caching mechanism. This caching mechanism provides you with a language-supported and system-managed means for storing the results of PL/SQL functions in the System Global Area (SGA), which is available to every session that runs your application. The caching mechanism is both efficient and easy to use, and it relieves you of the burden of designing and developing your own caches and cache-management policies. Oracle Database 11g provides the ability to mark a PL/SQL function to indicate that its result should be cached to allow lookup, rather than recalculation, on the next call with the same parameter values. This function result cache saves significant space and time. This is done transparently, using the input parameters as the lookup key. The cache is system-wide, so all distinct sessions invoking the function benefit. If the result for a given set of parameters changes, you can use constructs to invalidate the cache entry so that it is properly recalculated on the next access. This feature is especially useful when the function returns a value that is calculated from data selected from schema-level tables. For such uses, the invalidation constructs are simple and declarative. You can include syntax in the source text of a PL/SQL function to request that its results be cached and, to ensure correctness, that the cache be purged when any of a list of tables experiences DML. When a particular invocation of the result-cached function is a cache hit, the function body is not executed; instead, the cached value is returned immediately.
Oracle Database 11g: New Features for Administrators 16 - 29
Using PL/SQL Function Cache
• Include the RESULT_CACHE option in the function declaration section of a package or function definition
• Optionally include the RELIES_ON clause to specify any tables or views on which the function results depend

CREATE OR REPLACE FUNCTION productName
  (prod_id NUMBER, lang_id VARCHAR2)
  RETURN NVARCHAR2
  RESULT_CACHE RELIES_ON (product_descriptions)
IS
  result NVARCHAR2(50);
BEGIN
  SELECT translated_name INTO result
  FROM   product_descriptions
  WHERE  product_id = prod_id
  AND    language_id = lang_id;
  RETURN result;
END;
Using PL/SQL Function Cache
In the example shown above, the function productName has result caching enabled through the RESULT_CACHE option in the function declaration. In this example, the RELIES_ON clause is used to identify the PRODUCT_DESCRIPTIONS table on which the function results depend.
Usage Notes:
• If function execution results in an unhandled exception, the exception result is not stored in the cache.
• The body of a result-cached function executes:
  – The first time a session on this database instance calls the function with these parameter values
  – When the cached result for these parameter values is invalid. A cached result becomes invalid when any database object specified in the RELIES_ON clause of the function definition changes.
  – When the cached result for these parameter values has aged out. If the system needs memory, it might discard the oldest cached values.
  – When the function bypasses the cache
• The function should not have any side effects.
• The function should not depend on session-specific settings.
• The function should not depend on session-specific application contexts.
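Calling the function shown above behaves as follows (a sketch using the slide's names):

```sql
-- First call with these arguments executes the body and caches the result
SELECT productName(101, 'US') FROM dual;

-- Repeated calls with the same arguments are served from the cache,
-- until DML on PRODUCT_DESCRIPTIONS (named in RELIES_ON) invalidates it:
UPDATE product_descriptions
SET    translated_name = 'Widget'
WHERE  product_id = 101 AND language_id = 'US';

-- The next call re-executes the body and refreshes the cached value
SELECT productName(101, 'US') FROM dual;
```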
Oracle Database 11g: New Features for Administrators 16 - 30
PL/SQL Function Cache Considerations
PL/SQL Function Cache cannot be used when: • The function is defined in a module that has invoker's rights or in an anonymous block. • The function is a pipelined table function. • The function has OUT or IN OUT parameters. • The function has IN parameter of the following types: BLOB, CLOB, NCLOB, REF CURSOR, collection, object, or record. • The function's return type is: BLOB, CLOB, NCLOB, REF CURSOR, object, record or collection with one of the preceding unsupported return types.
Bitmap Join Index for IOTs
Oracle Database 11g extends bitmap join index support to index-organized tables (IOTs). A join index is an index on a table T1 built for a column of a different table T2 via a join. The index therefore provides access to rows of T1 based on columns of table T2. Join indexes can be used to avoid actual joins of tables, or can reduce the volume of data to be joined by performing restrictions in advance. Bitmap join indexes are space-efficient and can speed up queries via bitwise operations. As in the case of bitmap indexes, these IOTs have an associated mapping table. Because IOT rows may change their position due to DML or index reorganization operations, the bitmap join index cannot rely on the physical row identifiers of the IOT rows. Instead, the row identifier of the mapping table associated with the IOT is used.
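A hedged sketch of the DDL involved (all names are hypothetical; note the MAPPING TABLE clause on the IOT, which the bitmap join index relies on instead of physical rowids):

```sql
-- IOT fact table with the mapping table required for bitmap indexes
CREATE TABLE sales_iot (
  sale_id  NUMBER PRIMARY KEY,
  cust_id  NUMBER,
  amount   NUMBER
) ORGANIZATION INDEX MAPPING TABLE;

-- Bitmap join index: index SALES_IOT rows by a CUSTOMERS column
-- (CUSTOMERS.CUST_ID must be unique for the join)
CREATE BITMAP INDEX sales_cust_region_bjx
  ON sales_iot (customers.region)
  FROM sales_iot, customers
  WHERE sales_iot.cust_id = customers.cust_id;
```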
Oracle Database 11g: New Features for Administrators 16 - 32
Automatic “Native” Compilation
• 100+% faster for pure PL/SQL or Java code
• 10% – 30% faster for typical transactions with SQL
• PL/SQL
  – Just one parameter: On / Off
  – No need for C compiler
  – No file system DLLs
• Java
  – Just one parameter: On / Off
  – JIT “on the fly” compilation
  – Transparent to user (asynchronous, in background)
  – Code stored to avoid recompilations
Automatic “Native” Compilation
PL/SQL Native Compilation: The Oracle executable generates native dynamically linked libraries (DLLs) directly from the PL/SQL source code without needing a third-party C compiler. In Oracle Database 11g, the DLL is stored canonically in the database catalog and, when it is needed, the Oracle executable loads it directly from the catalog without needing to stage it first on the file system. The execution speed of natively compiled PL/SQL programs will never be slower than in Oracle Database 10g and may be improved in some cases by as much as an order of magnitude. PL/SQL native compilation is automatically available with Oracle Database 11g. No third-party software (neither a C compiler nor a DLL loader) is needed.
Java Native Compilation: Enabled by default and similar to the JDK JIT, this feature compiles Java in the database natively and transparently, without the need for a C compiler. The JIT runs as an independent session in a dedicated Oracle server process. There is at most one compiler session per database instance; it is Oracle RAC-aware and amortized over all Java sessions. This feature brings two major benefits to Java in the database: increased performance of pure Java execution in the database, and ease of use, as it is activated transparently, without the need for an explicit command, when Java is executed in the database. Because this feature removes the need for a C compiler, there are cost and license savings.
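The "just one parameter" mentioned above is PLSQL_CODE_TYPE for PL/SQL and JAVA_JIT_ENABLED for Java; a sketch (MY_PROC is a hypothetical unit name):

```sql
-- Compile subsequently created PL/SQL units natively in this session
ALTER SESSION SET PLSQL_CODE_TYPE = NATIVE;

-- Recompile an existing unit natively
ALTER PROCEDURE my_proc COMPILE PLSQL_CODE_TYPE = NATIVE REUSE SETTINGS;

-- Java JIT compilation is on by default; it can be toggled system-wide
ALTER SYSTEM SET JAVA_JIT_ENABLED = TRUE;
```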
Oracle Database 11g: New Features for Administrators 16 - 33
Adaptive Cursor Sharing

SELECT … FROM … WHERE job = :B1
Adaptive Cursor Sharing
In many cases, one optimizer plan may not be appropriate for all bind values. In Oracle Database 11g, cursor sharing has been enhanced so that the optimizer peeks at bind values during plan selection and takes ranges of safe values into account when evaluating cursor shareability. This enables you to leverage cursor sharing more commonly while preserving bind-variable-specific plan optimizations for shared statements. In the above example, assume that a query is retrieving information for EMPLOYEES based on a bind variable. In case 1, if the bind variable value at hard parse is "CLERK", five out of six records will be selected, so the execution plan will be a full table scan. In case 2, if "VP" is the bind variable value at hard parse, one out of the six records is selected and the execution plan may be an index lookup. Therefore, instead of the execution plan being reused for each value of the bind variable, the optimizer looks at the selectivity of the data and determines a different execution plan to retrieve the data. The benefits of adaptive cursor sharing are:
• The optimizer shares the plan when bind variable values are "equivalent".
• Plans are marked with a selectivity range. If current bind values fall within the range, they use the same plan.
• The optimizer creates a new plan if bind variable values are not equivalent.
• The optimizer generates a new plan for each selectivity range.
• The optimizer avoids expensive table scans and index searches, based on selectivity criteria, thus speeding up data retrieval.
Oracle Database 11g: New Features for Administrators 16 - 34
Adaptive Cursor Sharing Views
The following views provide information about Adaptive Cursor Sharing usage:

V$SQL: Two new columns show whether a cursor is bind-sensitive or bind-aware.
V$SQL_CS_HISTOGRAM: Shows the distribution of the execution count across the execution history histogram.
V$SQL_CS_SELECTIVITY: Shows the selectivity ranges stored for every predicate containing a bind variable and whose selectivity is used in the cursor sharing checks.
V$SQL_CS_STATISTICS: Shows execution statistics of a cursor using different bind sets.
Adaptive Cursor Sharing Views
Determining whether a query is bind-aware is handled automatically, without any user input. However, information about what is going on is exposed through V$ views so that the DBA can diagnose any problems. Two new columns have been added to V$SQL:
• IS_BIND_SENSITIVE: Indicates whether a cursor is bind-sensitive (YES | NO). A query for which the optimizer peeked at bind variable values when computing predicate selectivities, and where a change in a bind variable value may lead to a different plan, is called bind-sensitive.
• IS_BIND_AWARE: Indicates whether a cursor is bind-aware (YES | NO). A cursor in the cursor cache that has been marked to use extended cursor sharing is called bind-aware.
V$SQL_CS_HISTOGRAM: Shows the distribution of the execution count across the three-bucket execution history histogram.
V$SQL_CS_SELECTIVITY: Shows the selectivity ranges stored in a cursor for every predicate containing a bind variable and whose selectivity is used in the cursor sharing checks. It contains the text of the predicates and the selectivity range low and high values.
V$SQL_CS_STATISTICS: Use this view to find out whether executing the cursor with a different bind set, other than the ones used to build it, hinders performance. This view is populated with the information stored for the peeked bind set, and contains information for other bind sets only when running under diagnostic mode. The PEEKED column contains YES if the bind set was used to build the cursor, and NO otherwise.
Oracle Database 11g: New Features for Administrators 16 - 35
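A hypothetical diagnostic query tying these views together (&sql_id is a substitution variable you would supply):

```sql
-- Find cursors the optimizer has marked bind-sensitive or bind-aware
SELECT sql_id, child_number, is_bind_sensitive, is_bind_aware, executions
FROM   v$sql
WHERE  is_bind_sensitive = 'Y' OR is_bind_aware = 'Y';

-- For one of them, inspect the selectivity ranges behind each child cursor
SELECT child_number, predicate, low, high
FROM   v$sql_cs_selectivity
WHERE  sql_id = '&sql_id';
```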
Temporary Tablespace Shrink • Sort segment extents are managed in memory once physically allocated. • This method can be an issue after big sorts are done. • To release physical space from your disks, you can shrink temporary tablespaces: – Locally-managed temporary tablespaces – Online operation CREATE TEMPORARY TABLESPACE temp TEMPFILE 'tbs_temp.dbf' SIZE 600m REUSE AUTOEXTEND ON MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1m; ALTER TABLESPACE temp SHRINK SPACE [KEEP 200m]; ALTER TABLESPACE temp SHRINK TEMPFILE 'tbs_temp.dbf';
Temporary Tablespace Shrink
Huge sorting operations can cause a temporary tablespace to grow considerably. For performance reasons, once a sort extent is physically allocated, it is managed in memory to avoid physical deallocation later. As a result, you can end up with a huge tempfile that stays on disk until it is dropped. One possible workaround is to create a new, smaller temporary tablespace, set it as the default temporary tablespace for users, and then drop the old tablespace. The disadvantage of this procedure is that it requires no active sort operations to be running at the time the old temporary tablespace is dropped. Starting with Oracle Database 11g Release 1, you can use the ALTER TABLESPACE SHRINK SPACE command to shrink a temporary tablespace, or the ALTER TABLESPACE SHRINK TEMPFILE command to shrink a single tempfile. For both commands, you can specify the optional KEEP clause, which defines the lower bound to which the tablespace or tempfile can be shrunk. If you omit the KEEP clause, the database attempts to shrink the tablespace or tempfile as much as possible (down to the total space of all currently used extents), as long as other storage attributes are satisfied. This operation is done online. However, if some currently used extents are allocated above the shrink estimation, the system waits until they are released to finish the shrink operation.
Note: The ALTER DATABASE TEMPFILE RESIZE command generally fails with ORA-03297 because the tempfile contains used data beyond the requested RESIZE value. As opposed to ALTER TABLESPACE SHRINK, the ALTER DATABASE command does not try to deallocate sort extents once they are allocated.
Oracle Database 11g: New Features for Administrators 16 - 36
DBA_TEMP_FREE_SPACE
• Lists temporary space usage information.
• Central point for temporary tablespace space usage.

TABLESPACE_NAME: Name of the tablespace
TABLESPACE_SIZE: Total size of the tablespace, in bytes
ALLOCATED_SPACE: Total allocated space, in bytes, including space that is currently allocated and used and space that is currently allocated and available for reuse
FREE_SPACE: Total free space available, in bytes, including space that is currently allocated and available for reuse and space that is currently unallocated
DBA_TEMP_FREE_SPACE This dictionary view reports temporary space usage information at tablespace level. The information is derived from various existing views.
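For example, a sketch that converts the byte counts to megabytes:

```sql
SELECT tablespace_name,
       tablespace_size / 1024 / 1024 AS size_mb,
       allocated_space / 1024 / 1024 AS allocated_mb,
       free_space      / 1024 / 1024 AS free_mb
FROM   dba_temp_free_space;
```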
Oracle Database 11g: New Features for Administrators 16 - 37
Tablespace Option for Creating Temporary Table
• Specify which temporary tablespace to use for your global temporary tables. • Decide proper temporary extent size. CREATE TEMPORARY TABLESPACE temp TEMPFILE 'tbs_temp.dbf' SIZE 600m REUSE AUTOEXTEND ON MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1m; CREATE GLOBAL TEMPORARY TABLE temp_table (c varchar2(10)) ON COMMIT DELETE ROWS TABLESPACE temp;
Tablespace Option for Creating Temporary Table
Starting with Oracle Database 11g Release 1, you can specify a TABLESPACE clause when you create a global temporary table. If no tablespace is specified, the global temporary table is created in your default temporary tablespace. In addition, indexes created on the temporary table are created in the same temporary tablespace as the table. This allows you to choose a proper extent size that reflects your sort-specific usage, especially when you have several types of temporary space usage.
Oracle Database 11g: New Features for Administrators 16 - 38
Real-Time Query and Physical Standby Databases In previous database releases, when you opened the physical standby database for read-only, redo application stopped. Oracle Database 11g allows you to use a physical standby database for queries while redo is applied to the physical standby database. This enables you to use a physical standby database for disaster recovery and to offload work from the primary database during normal operation. In addition, this feature provides a loosely coupled read-write clustering mechanism for OLTP workloads when configured as follows: • Primary database: Recipient of all update traffic • Several readable standby databases: Used to distribute the query workload The physical standby database can be opened read-only only if all the files have been recovered up to the same system change number (SCN), otherwise the open will fail.
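The open-while-applying sequence on the standby can be sketched as follows (assumes a physical standby that is mounted with managed recovery running; this is illustrative, not a complete procedure):

```sql
-- On the physical standby:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE OPEN READ ONLY;
-- Restart redo apply; the database stays open for queries
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  USING CURRENT LOGFILE DISCONNECT FROM SESSION;
```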
Oracle Database 11g: New Features for Administrators 16 - 39
Summary
In this lesson, you should have learned how to:
• Describe and use the enhanced online table redefinition and materialized views
• Describe finer-grained dependency management
• Use enhanced DDL:
  – Apply the improved table lock mechanism
  – Create and use invisible indexes