Installation of the Demantra Base Application or Patches?
Before running setup.exe, create a C:\Tmp folder on the machine where it will be executed; the installer writes a log file to this folder.
Note that there will also be some ancillary logs under the following folders:
Demand Planner\Database Objects\Oracle Server (Oracle back end database)
Demand Planner\Database Objects\Microsoft SQL Server (Sql Server back end database)
Error starting the WebServer and/or deploying the Demantra WAR file?
Review the Collaborator.log which is found under the ...Collaborator\demantra\logs folder on the web server where Demantra is running.
Also, each individual web server (e.g. OAS, JRun, Tomcat, WebSphere) keeps its own logs in various directories, which can also be consulted.
The individual webserver documentation should be reviewed for these log locations.
Errors with the Demantra Batch or Simulation Engine?
The key engine logs are found under the root drive of the machine where the engine is launched; the folder that holds them usually has Engine2k somewhere in its name.
At a minimum there should be Engine2k logs in this folder (these are the logs that show the actual splitting and forecasting that the engine does).
However, to write the master EngineManager logs to this folder, go to the Engine Administrator (..Demand Planner\Analytical Engines\bin --> Settings --> Configure) and change the Engine Manager Log tab from STD to FILE.
It is NOT advised to re-register the engine after making this change.
Issues with EBS Integration to Demantra?
In general, most EBS-side activities (e.g. Planning Data Pull) can be logged using Note 245974.1. In addition, the Shipment and Bookings Collection job can have extra logging turned on by setting the profile option MSD_DEM: Debug Mode to Yes (the default is No).
Note that as the Shipment and Bookings Collection job crosses from the EBS side of the database to the Demantra side (usually indicated by references to the Demantra schema or to tables such as t_src_items_tmpl or t_src_sales_tmpl_err), errors may need to be tracked using the Workflow or Active_Proc_Dyn troubleshooting sections of this note.
Stored Procedure Errors with Active_Proc_Dyn somewhere in the error message?
To view the errors go to Business Modeler --> Tools --> Procedure Error Log
Note that this looks to the backend table called db_exception_log
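If the Business Modeler client is not available, the same information can be read straight from the backend table. A minimal sketch, run from SQL*Plus as the Demantra schema owner; the column list is deliberately left as * because column names vary between Demantra versions:

```sql
-- Inspect recent Active_Proc_Dyn / stored procedure errors directly.
-- db_exception_log is the table the Procedure Error Log screen reads from.
SELECT *
  FROM db_exception_log;
```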
Oracle Demantra customer, are you considering an upgrade to your Oracle Demantra software?
There are three critical MOS notes that you may want to review.
12.2.1, 12.2.2, 12.2.3 and 12.2.4 Install Upgrade ADVISOR: Demantra (Doc ID 1664177.2)
DEMANTRA WARNING Have you OR are you Planning to Upgrade to Release 12.2.2? Upgrade from 12.2.2 to 12.2.3 or 12.2.4 Collections EP Load is Failing ORA-20002, ORA-00001, ORA-06512 (Doc ID 1917715.1)
Upgrading to Demantra 12.2.4? Here are current known issues from Demantra Development, Proactive Services and Oracle Support (Doc ID 1928367.1)
Upgrading to 12.2.2? Do you want extra speed? Use a logon trigger for parallelism.
When upgrading your Demantra application from an older version directly to 12.2.2, you can minimize the upgrade time by forcing parallelism with a logon trigger on the server.
As part of the upgrade, a script builds a new set of indexes, one per engine profile, on each profile's quantity_form expression. Creating these indexes on large sales data tables can take a long time.
Remember to disable or drop that trigger when the upgrade has completed.
The following trigger was created as the SYS database user and tested successfully by creating a DEMANTRA database user and logging in with it:

CREATE OR REPLACE TRIGGER force_parallel_ddl_trg
AFTER LOGON ON DATABASE
BEGIN
  IF (USER = 'DEMANTRA') THEN
    EXECUTE IMMEDIATE 'ALTER SESSION FORCE PARALLEL DDL';
  END IF;
END;
/

Make sure to:
Replace DEMANTRA with your actual database schema name.
Drop the trigger once it is no longer needed, using the following:
DROP TRIGGER force_parallel_ddl_trg;
What happens if RVWR cannot write to disk?
It depends on the context where the write error occurs:
If there’s a Guaranteed Restore Point, the database crashes to ensure the restore point guarantee is not voided.
If there isn’t a Guaranteed Restore Point and it’s a primary database, Flashback Mode will be automatically turned off for the database, which will continue to operate normally.
If there isn’t a Guaranteed Restore Point and it’s a standby database, the database will hang until the cause of the write failure is fixed.
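Because a primary database turns Flashback Mode off silently in this scenario, it is worth verifying the current state after any RVWR write error. This can be checked from any session:

```sql
-- FLASHBACK_ON returns YES, NO, or RESTORE POINT ONLY.
SELECT flashback_on FROM v$database;
```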
Is it possible to specify the size of the Flashback Buffer in the SGA?
Yes, but indirectly. The size of the Flashback Buffer is set to 2 * LOG_BUFFER.
For performance reasons, it’s recommended to set LOG_BUFFER to at least 8MB for
databases running in Flashback Mode.
UPDATE: For large databases with an SGA larger than 4GB, you may consider setting LOG_BUFFER to values in the range of 32-64 MB.
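The advice above can be applied as follows. Note that LOG_BUFFER is a static parameter, so the change only takes effect after an instance restart; the exact V$SGASTAT entry name for the flashback buffer may vary slightly between releases:

```sql
-- LOG_BUFFER is static: set it in the spfile, then restart the instance.
ALTER SYSTEM SET log_buffer = 33554432 SCOPE = SPFILE;  -- 32 MB

-- After the restart, the flashback buffer allocation is visible in the SGA:
SELECT pool, name, bytes
  FROM v$sgastat
 WHERE name LIKE 'flashback generation buff%';
```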
Can RMAN be used to backup flashback logs?
No. Flashback Logs are not backed up. Even if the command BACKUP RECOVERY AREA is used to backup the contents of the FRA to tape only the following file types are backed up: full and incremental backup sets, control file autobackups, datafile copies, and archived redo logs.
Flashback Logs are considered to be transient files and cannot be backed up by RMAN. They are not needed for media recovery.
When are the flashback logs deleted?
Flashback logs are managed by Oracle only. Oracle will try to keep as many flashback logs as needed to satisfy the DB_FLASHBACK_RETENTION_TARGET parameter. However, if there’s space pressure in the Flash Recovery Area (FRA), flashback logs may be deleted to make room for other files, such as backups and archived logs.
If the fast recovery area has enough space, then a flashback log is created whenever necessary to satisfy the flashback retention target.
If a flashback log is old enough that it is no longer needed to satisfy the flashback retention target, then a flashback log is reused.
If the database must create a new flashback log and the fast recovery area is full or there is no disk space, then the oldest flashback log is reused instead.
If the fast recovery area is full, then an archived redo log that is reclaimable according to the FRA rules may be automatically deleted by the fast recovery area to make space for other files. In this case, any flashback logs that would require the use of that redo log file for the use of FLASHBACK DATABASE are also deleted.
No file in the fast recovery area is eligible for deletion if it is required to satisfy a guaranteed restore point. Thus, retention of flashback logs and other files required to satisfy the guaranteed restore point, in addition to files required to satisfy the backup retention policy, can cause the fast recovery area to fill completely.
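The space pressure described above can be monitored with the standard FRA views (on 10g the second view is named V$FLASH_RECOVERY_AREA_USAGE instead):

```sql
-- Overall FRA consumption.
SELECT space_limit, space_used, space_reclaimable
  FROM v$recovery_file_dest;

-- Breakdown by file type, including FLASHBACK LOG.
SELECT file_type, percent_space_used, percent_space_reclaimable
  FROM v$recovery_area_usage;

-- How far back FLASHBACK DATABASE can currently go.
SELECT oldest_flashback_scn, oldest_flashback_time
  FROM v$flashback_database_log;
```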
How to list restore points in RMAN?
In RMAN you can use the LIST RESTORE POINT [ALL | restore_point_name] command. If you use a recovery catalog, you can query the RC_RESTORE_POINT view in the recovery catalog repository, or query the V$RESTORE_POINT view in the target database.
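For example, from the target database:

```sql
-- RMAN> LIST RESTORE POINT ALL;
-- Equivalent query from SQL*Plus; GUARANTEE_FLASHBACK_DATABASE
-- distinguishes guaranteed from normal restore points.
SELECT name, scn, time, guarantee_flashback_database
  FROM v$restore_point;
```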
After flashback’ing to a point-in-time before a RESETLOGS operation is it possible to flash forward to the incarnation after the RESETLOGS?
Yes, it’s perfectly possible.
Can you see the progress of a FLASHBACK DATABASE operation?
Yes, you can. During a FLASHBACK DATABASE operation you can query V$SESSION_LONGOPS from another session to see the progress of the flashback.
The FLASHBACK DATABASE operation has two distinct phases: the actual flashback and the media recovery that happens afterwards to bring the database to a consistent state.
While the actual flashback is running you’ll see the following message in V$SESSION_LONGOPS, on Oracle 11gR2:
Flashback Database: Flashback Data Applied : 238 out of 282 Megabytes done
During the media recovery, the following messages will be seen:
Media Recovery: Redo Applied : 263 out of 0 Megabytes done
Media Recovery: Average Apply Rate : 1164 out of 0 KB/sec done
Media Recovery: Last Applied Redo : 626540 out of 0 SCN+Time done
Media Recovery: Elapsed Time : 232 out of 0 Seconds done
Media Recovery: Active Time : 116 out of 0 Seconds done
Media Recovery: Active Apply Rate : 1859 out of 0 KB/sec done
Media Recovery: Maximum Apply Rate : 1859 out of 0 KB/sec done
Media Recovery: Log Files : 15 out of 0 Files done
Media Recovery: Apply Time per Log : 7 out of 0 Seconds done
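The messages above can be retrieved with a query along these lines, run from a second session while the flashback is in progress:

```sql
-- Progress of both phases of a FLASHBACK DATABASE operation.
SELECT opname, sofar, totalwork, units, message
  FROM v$session_longops
 WHERE opname LIKE 'Flashback%'
    OR opname LIKE 'Media Recovery%';
```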
How should I set the database to improve Flashback performance?
Oracle’s recommendations are:
Use a fast file system for your flash recovery area, preferably one that avoids operating system file caching, such as ASM.
Configure enough disk spindles for the file system that will hold the flash recovery area. For large production databases, multiple disk spindles may be needed to support the required disk throughput for the database to write the flashback logs effectively.
If the storage system used to hold the flash recovery area does not have non-volatile RAM, try to configure the file system on top of striped storage volumes, with a relatively small stripe size such as 128K. This will allow each write to the flashback logs to be spread across multiple spindles, improving performance.
For large, production databases, set the init.ora parameter LOG_BUFFER to be at least 8MB. This makes sure the database allocates maximum memory (typically 16MB) for writing flashback database logs.