Wednesday, November 13, 2019

crsctl stat res -t shows a third ASM resource in an incorrect status in a two-node cluster setup (12c, 18c, 19c)

Starting in 12c, once we finish the setup of a two-node cluster, the output of crsctl stat res -t shows a third ASM resource in an incorrect status:

ora.asm
      1        ONLINE  ONLINE       rac18c1                  Started,STABLE
      2        ONLINE  ONLINE       rac18c2                  Started,STABLE
      3        OFFLINE OFFLINE                               STABLE


According to note 2132715.1 from Oracle Support, this is expected behaviour: with Flex ASM, the default cardinality is 3 instances.

If we check the cardinality of the ASM instance, we can verify it:

[oracle@rac18c2 ~]$ srvctl config asm
ASM home: <CRS home>
Password file: +DATA/orapwASM
Backup of Password file:
ASM listener: LISTENER
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM
[oracle@rac18c2 ~]$
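The same value is also stored in the clusterware resource profile, so (as a sketch, assuming a standard Grid Infrastructure environment with the CRS tools in the PATH) it can be read directly from the resource attributes:

```shell
# Print the full profile of the ora.asm resource and filter the
# CARDINALITY attribute, which holds the same count srvctl reports
crsctl stat res ora.asm -f | grep -i cardinality
```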

So we can safely leave it as it is, or we can change the cardinality.
According to the mentioned note, the best way to change the cardinality is to set it to ALL:

[oracle@rac18c2 ~]$ srvctl modify asm -count ALL

We verify the cardinality again:

[oracle@rac18c2 ~]$ srvctl config asm
ASM home: <CRS home>
Password file: +DATA/orapwASM
Backup of Password file:
ASM listener: LISTENER
ASM instance count: ALL
Cluster ASM listener: ASMNET1LSNR_ASM
[oracle@rac18c2 ~]$


And we verify it again with crsctl. We see that the status is now corrected:

[oracle@rac18c2 ~]$ crsctl stat res ora.asm -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       rac18c1                  Started,STABLE
      2        ONLINE  ONLINE       rac18c2                  Started,STABLE
--------------------------------------------------------------------------------
[oracle@rac18c2 ~]$
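Should we ever want to return to the default behaviour, the same srvctl command accepts a numeric count again; a sketch, assuming the same 18c syntax shown above:

```shell
# Restore the default Flex ASM cardinality of 3 instances
srvctl modify asm -count 3

# Confirm the change
srvctl config asm
```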


Tuesday, September 3, 2019

Error running DBCA - ORA-27528 Transport RDS required by Engineered System not available

After some time without being able to update my blog, due to my schedule, I will now try to post updates more often.
This is an error I found the other day while running DBCA on an Exadata virtual machine, using a previously created template for the database.


At first glance I could not find any related note in Oracle Support, but a Google search returned some results and directed me to the correct note.
The problem happens because the parameter cluster_interconnects is set to one or more specific IPs. Of course, those IPs are only available on one node, but DBCA tries to use them on all nodes, so it fails.
Solution: there are a couple of solutions, but since we are in DBCA, the best thing to do is to leave the cluster_interconnects parameter empty and update it after the database has been created. This is what I did, and it worked like a charm.
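To update the parameter afterwards, cluster_interconnects can be set per instance with a SID-specific ALTER SYSTEM. A sketch, assuming hypothetical instance names (ORCL1, ORCL2) and private IPs — replace them with the ones from your own environment:

```shell
# Set cluster_interconnects individually for each RAC instance.
# scope=spfile: the change takes effect after the instances restart.
sqlplus -s / as sysdba <<'EOF'
alter system set cluster_interconnects='192.168.10.1' scope=spfile sid='ORCL1';
alter system set cluster_interconnects='192.168.10.2' scope=spfile sid='ORCL2';
EOF
```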

Reference: DBCA errors when cluster_interconnects is set (Doc ID 1373591.1)