Clarification of Oracle Database Version Numbers
No, don't use it for mission-critical Oracle databases if you value the DB. VDP has to quiesce (stop / silence) the DB, and this can cause performance problems and, in my case, control file and eventually redo log corruption.
VMware's "Oracle Databases on VMware Best Practices Guide" (copyright 2016 VMware) states on page 54…
If you have the following in your alert log, it's probably too late. At that point, follow the directions in the alert log, or restore if a startup was attempted.
The controlfile header block returned by the OS
has a sequence number that is too old.
The controlfile might be corrupted.
With corrupted redo logs, or if you just need to restore from a backup with RESETLOGS, you'll lose what is in the redo logs, but you will have a DB. This requires a full backup set, or a level 0 plus level 1 incrementals.
If you don't know your DBID: startup mount; then select dbid from v$database;
Use SHOW CONTROLFILE AUTOBACKUP (or SHOW ALL); hopefully autobackup is on, or you know where the current control file backup is.
If you are using an FRA, the control file backup may be in the FRA autobackup directory unless the format was explicitly set elsewhere.
rman target / log=dbbackup.log
show controlfile autobackup;
show controlfile autobackup format;
# must be in startup nomount to restore the controlfile
restore controlfile from autobackup;
# must be in mount mode (alter database mount) before restoring the database and opening resetlogs
# ok, if your backups and archivelogs are good and you don't mind losing your redo logs…
alter database open resetlogs;
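Put end to end, the restore above looks roughly like this at the RMAN prompt. This is a sketch, not a recipe: the DBID is a placeholder, and the RESTORE/RECOVER DATABASE steps assume your backup sets and archivelogs are intact.

```
rman target / log=dbrestore.log
SET DBID 1234567890;        # placeholder; needed if RMAN cannot identify the DB in NOMOUNT
STARTUP NOMOUNT;
RESTORE CONTROLFILE FROM AUTOBACKUP;
ALTER DATABASE MOUNT;
RESTORE DATABASE;
RECOVER DATABASE;
ALTER DATABASE OPEN RESETLOGS;
```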
Here is one way (without all the screen shots) to pick up an existing DB and the Oracle binaries and move them to another server, without running runInstaller, dbca, opatch, or rman:
- You only have one home and one database under your $ORACLE_BASE. This example does a single DB, but you could modify the process below if you have more than one home or DB under your $ORACLE_BASE.
- Your datafiles are within the $ORACLE_BASE and your installation mount points will be the same on both the source and target servers. In my case, my source and destination both do and will reside in /u01/app.
$ORACLE_BASE/oradata/SID is the standard default location for datafiles, so if you pick up the entire $ORACLE_BASE you get all the databases under the base, assuming you used the default directory structures on the original source install.
Shut down the database you want to clone.
Zip up or tar the source $ORACLE_BASE. This combines the Oracle binaries, the Oracle "admin" directory structure, the FRA if you set one up, and the Oracle datafiles / the database itself. This directory architecture is the default structure if you chose default install options on the source install; your $ORACLE_BASE may vary.
# cd to a mount shared by both servers - yours will be different - or just ftp the tar file in binary mode
tar -cvf oracleBase.tar /u01/app/oracle
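The whole tar round trip can be sketched with a scratch tree standing in for the real /u01/app/oracle; all the paths below are illustrative only. Taring relative to the mount point means the files unwind to identical paths on the target.

```shell
#!/bin/sh
# Scratch tree standing in for the real /u01/app/oracle.
WORK=$(mktemp -d)
mkdir -p "$WORK/u01/app/oracle/oradata/ORCL"
echo "datafile" > "$WORK/u01/app/oracle/oradata/ORCL/system01.dbf"

# Source side: tar relative to the mount point so the paths
# unwind identically on the target.
( cd "$WORK" && tar -cf oracleBase.tar u01/app/oracle )

# Target side: unwind from the equivalent of "/" (a second scratch
# dir here); -p preserves permissions when extracting as root.
TARGET=$(mktemp -d)
( cd "$TARGET" && tar -xpf "$WORK/oracleBase.tar" )

# The datafile arrives at the same relative path.
ls "$TARGET/u01/app/oracle/oradata/ORCL/system01.dbf"
```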
Copy the tar file from the source server to the target server root directory "/" (copy commands not shown; you may have to use root). Unwind the tar file from the root directory "/".
su - root
cd /media/sf_OEL6.6+DBSetups/
cp -p oracleBase.tar /
tar -xvf oracleBase.tar
If you used the root account to unwind the tar, change the ownership of the new directory structure (again as root):
# your mount point may vary
chown -R oracle:oinstall app
You will probably want to copy your oracle .bash_profile from your source to your target.
Change your $ORACLE_HOME/network/admin/listener.ora and tnsnames.ora to the new server/host name
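The host-name swap can be done with sed. A sketch follows; "oldhost" / "newhost" and the scratch file are placeholders, the real files live in $ORACLE_HOME/network/admin, and listener.ora gets the same treatment.

```shell
#!/bin/sh
# Scratch dir standing in for $ORACLE_HOME/network/admin.
ADMIN=$(mktemp -d)
cat > "$ADMIN/tnsnames.ora" <<'EOF'
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oldhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = ORCL))
  )
EOF

# Rewrite the host name in place, keeping a .bak copy of the original.
sed -i.bak 's/HOST = oldhost/HOST = newhost/' "$ADMIN/tnsnames.ora"
grep 'HOST' "$ADMIN/tnsnames.ora"
```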
Use Oracle's Perl-based clone tool as follows (your $ORACLE_HOME is likely different):
$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/clone/bin/clone.pl ORACLE_BASE="/u01/app/oracle" ORACLE_HOME="/u01/app/oracle/product/12.1.0/dbhome_1" OSDBA_GROUP=dba OSOPER_GROUP=dba -defaultHomeName
You'll be prompted to run a root script; when this program completes, execute it.
Modify your target /etc/oratab to match your source.
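The /etc/oratab format is SID:ORACLE_HOME:Y|N, where the last field tells dbstart whether to auto-start the instance. An illustrative entry matching the home used above, with ORCL standing in for your SID:

```
ORCL:/u01/app/oracle/product/12.1.0/dbhome_1:N
```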
Done deal – hope this helps. Mike
12c? Yes, but not for the features Oracle wants you to like. So far, anyway, I'm not seeing much use of container DBs, and I don't know anyone using Oracle's cloud. Don't get me wrong, I know there are lots of customers using both features, and Oracle is blazing new paths with these types of features.
No, what's grabbing my attention is the array of new features and enhancements across the board in 12c.
I would add that, with the exponential growth of database sizes and table storage, the pre-existing partitioning feature option (additional $$$) is almost a necessity these days for most TB+ sized databases, and its use should be growing quickly.
One very interesting new 12c feature is ACFS support for database files. Yes, it's on top of ASM, but my understanding is that it's a full-featured file system with advanced features like snapshots and real UNIX-like file system commands to manipulate it. If ACFS eliminates some of ASM's cryptic syntax, that alone would be a huge plus, but from what I read that is the tip of the iceberg…
Another feature I would like to take advantage of is stored Oracle Homes in a catalog, what Oracle is calling "Rapid Home Provisioning". The feature is overdue and, once mature, is sure to be used heavily in large Enterprise environments; it automates the manual Oracle Home cloning most large companies have been doing for a long time. Rapid Home Provisioning is a new feature only available in 12.1.0.2 and requires the use of Grid / ASM and HA NFS.
Here’s a link to an Oracle document addressing how to use the feature:
Unfortunately, at this time the process for using RHP is quite complex and requires a good deal of setup.
For now, my best bet is that Oracle Home cloning can be accomplished more simply by following the steps outlined in this Oracle Support note:
RDBMS Oracle Home Cloning Using OUI (Doc ID 565009.1)
Hope this helps, Mike
# Put your tablespace name in here (the &&tsname substitution variable)
select df.tablespace_name "Tablespace",
fs.GB_FS_BefAuto "Free Space",
df.GB_Tot_Space_BefAuto "Total Space Allocated",
round(100*(fs.GB_FS_BefAuto/df.GB_Tot_Space_BefAuto),2) "Percent Free",
df.GB_Tot_MaxBytes "MaxBytes To Allocate", GB_Rem_ToBe_AutoAllocated "Remaining To AutoAllocate"
from
(select tablespace_name, round(sum(bytes)/(1024*1024*1024),2) GB_FS_BefAuto
from dba_free_space
where tablespace_name like '&&tsname'
group by tablespace_name) fs,
(select tablespace_name, round(sum(bytes)/(1024*1024*1024),2) GB_Tot_Space_BefAuto,
round(sum(decode(autoextensible,'YES',MAXBYTES,'NO',BYTES)) / (1024*1024*1024), 2) GB_Tot_MaxBytes,
round(sum(decode(autoextensible,'YES',MAXBYTES,'NO',BYTES)-BYTES)/ (1024*1024*1024), 2) GB_Rem_ToBe_AutoAllocated
from dba_data_files
where tablespace_name like '&&tsname'
group by tablespace_name) df
where df.tablespace_name like '&&tsname'
and df.tablespace_name = fs.tablespace_name
order by df.tablespace_name;
alter session set nls_date_format='dd-mon-yy hh24:mi';
SELECT A.*, Round(A.Count#*B.AVG#/1024/1024) HOURLY_Avg_Mb
FROM (SELECT To_Char(First_Time,'YYYY-MM-DD HH24') HOUR, Count(1) Count#,
Min(RECID) Min#, Max(RECID) Max#
FROM v$log_history
GROUP BY To_Char(First_Time,'YYYY-MM-DD HH24')
ORDER BY 1 DESC) A,
(SELECT Avg(BYTES) AVG#, Count(1) Count#, Max(BYTES) Max_Bytes, Min(BYTES) Min_Bytes
FROM v$log) B;
# Creating a pluggable database from the seed container pdb$seed
# First create a data sub-directory under the root container cdb$root data directory (your path to the oradata dir may vary)
# while logged in as the oracle / Linux user that admins the 12c database
# BTW - I named this PDBMF11G because this was an 11g database that I am migrating some of the schemas from into a PDB (another post or two)
# then, connected to cdb$root as sys or with DBA privs, execute the following create pluggable database using the seed pdb$seed:
create pluggable database pdbmf11g
admin user mf11g identified by password;
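If you are not using Oracle Managed Files, the create statement typically also needs a FILE_NAME_CONVERT clause mapping the seed's files into the new directory. A sketch with assumed paths ("CDB1" stands in for your CDB name; adjust to your oradata layout):

```sql
-- Hypothetical paths for illustration only.
-- First, from the OS: mkdir /u01/app/oracle/oradata/CDB1/pdbmf11g
create pluggable database pdbmf11g
  admin user mf11g identified by password
  file_name_convert = ('/u01/app/oracle/oradata/CDB1/pdbseed/',
                       '/u01/app/oracle/oradata/CDB1/pdbmf11g/');

-- A new PDB is created in MOUNTED state; open it before use:
alter pluggable database pdbmf11g open;
```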
# make all the pluggable databases open when the root does - works on the next shutdown / startup cycle
CREATE or REPLACE trigger OPEN_ALL_PLUGGABLES
after startup on database
begin
execute immediate 'alter pluggable database all open';
end;
/
# Changing containers
alter session set container=pdbmf11g;
select * from v$containers;
alter session set container=cdb$root;
Hope this helps, Mike
When connected to the root container CDB$ROOT, this lists users by container:
select c.name container, u.username username
from v$containers c, cdb_users u
where c.con_id = u.con_id
order by c.name;
Just in case you are not certain which container you are currently connected to from sqlplus or SQL Developer, etc.:
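One common way to check (show con_name is sqlplus-specific; the sys_context form works from any client, including SQL Developer):

```
show con_name

select sys_context('USERENV','CON_NAME') from dual;
```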