Call: +44 (0)1904 557620

Pete Finnigan's Oracle Security Weblog

This is the weblog for Pete Finnigan. Pete works in the area of Oracle security and he specialises in auditing Oracle databases for security issues. This weblog is aimed squarely at those interested in the security of their Oracle databases.

Recovering PL/SQL Source Code

It has been possible to wrap PL/SQL for many years using Oracle tools. In the first iterations in Oracle 7 this was done with the Oracle 7 wrap.exe, and this progressed through Oracle 8, 8i and finally 9iR2, also using wrap.exe. The internal format of the tool's output changed slightly over time, and each version supported the later Oracle versions' changes to the PL/SQL language itself.

In the first instance, from Oracle 7 through to Oracle 9iR2, the wrap mechanism was a simple write to a file of the internal state of the PL/SQL compiler memory after the lexical and semantic analysis phases had completed. This file could then be loaded into the database, where the data was put back into the PL/SQL compiler memory structures and the compilation continued to generate the p-code and install it into the database. One of the flaws of the Oracle 7 through to 9iR2 wrapped files was that the symbol table was visible in the wrap file, and whilst you could not re-construct a PL/SQL file from the symbol table except in very limited cases, it gave away a lot of detail of the protected PL/SQL. I showed back in 2001 that an Oracle 7 to 9iR2 wrapped file could be hacked by editing the symbol table to change its meaning BUT the PL/SQL still loaded, and the code could then be hacked. The example I gave back then (I think; it's a long time ago) was to change "..alter session set nls..." to "...alter user sys identified by a...". Where the code was installed as SYS or a DBA or a user with ALTER USER, this would change the SYS password and is an escalation to SYSDBA for the attacker. The attacker would of course need to have access to the PL/SQL wrapped files.
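The symbol-table edit described above can be modelled with a toy example. This is NOT the real Oracle wrap format; the blob layout and the "loader" below are invented purely to illustrate why a readable symbol table inside an otherwise opaque file is dangerous.

```python
# Toy model of the pre-10g symbol-table weakness: the "wrapped" blob keeps
# its strings readable, so an attacker can edit one in place before the
# file is loaded. The <sym> markers and the payload are invented for this
# illustration and bear no relation to the actual wrap file layout.

def load_and_run(wrapped: bytes) -> str:
    # Stand-in for the database loader: pull the visible "symbol" out of
    # the blob and pretend to execute it as a statement.
    start = wrapped.index(b"<sym>") + len(b"<sym>")
    end = wrapped.index(b"</sym>")
    return wrapped[start:end].decode()

original = b"HDR\x01\x02<sym>alter session set nls_date_format='DD-MM-YYYY'</sym>\x03"

# The attack: the symbol is plainly visible in the file, so edit it.
tampered = original.replace(
    b"alter session set nls_date_format='DD-MM-YYYY'",
    b"alter user sys identified by a",
)

print(load_and_run(original))   # the intended statement
print(load_and_run(tampered))   # the attacker's statement runs instead
```

The point is that the file never needed to be unwrapped: the attacker only needed write access to it and the visible strings to change what the database later executes.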

Then from Oracle 10.1 the wrap mechanism changed to stop the symbol table being visible, and to do this the method of wrapping changed. In 9iR2 the wrap file is the PL/SQL compiler memory part way through compiling; in 10g and above it is a pseudo obfuscation process applied to the PL/SQL file and not part of the compilation process. The PL/SQL file is simply protected by wrap.exe and then un-protected before loading into the database. This obviously shows that a wrapped file in 10g and above can be reversed; it has to be reversed to allow it to be loaded into the database and compiled. This means that unwrapping a 10g wrapped PL/SQL file is easy, and there are free tools and even public websites available that can be used to unwrap this PL/SQL.
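A minimal model shows why a keyless, reversible transform offers no real protection. This is a deliberate simplification and not Oracle's actual algorithm: the real 10g+ format also applies a fixed byte substitution table and carries a SHA-1 integrity hash, but the principle is the same — there is no secret key, so anyone holding the file can invert the transform, exactly as the database itself must.

```python
import base64
import zlib

# Toy "wrap": compress then base64. It looks opaque but reverses exactly,
# because the database (and therefore anyone) must be able to recover the
# source. Not the real Oracle algorithm - an illustration of the principle.

def toy_wrap(source: str) -> bytes:
    return base64.b64encode(zlib.compress(source.encode()))

def toy_unwrap(wrapped: bytes) -> str:
    return zlib.decompress(base64.b64decode(wrapped)).decode()

plsql = "create or replace procedure p as begin null; end;"
wrapped = toy_wrap(plsql)
print(wrapped[:20], b"...")   # looks like gibberish to a casual reader
print(toy_unwrap(wrapped))    # but the source comes back exactly
```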

There are no public websites or unwrappers for 9iR2 and earlier PL/SQL.

This means that the 9iR2 and earlier wrap format was in a sense better (apart from the symbol table being visible) as it is harder to unwrap.

OK, so why is all of this relevant now? Because I get on average at least 1 or 2 emails a week, sometimes as many as 4 or 5, asking me if I can help retrieve PL/SQL because the person emailing me has lost the source code and needs to modernise the code or add features or whatever. Some are governments and major utilities as well as many SME companies.

The issue seems to be a lack of process years ago. The clear text source code has been lost, maybe because the development project shut down and over time the only copy of the clear text code was lost or moved to backup tapes or....

If you use wrap.exe then ensure that it is part of a process; i.e. develop clear text code, test, wrap, release to production. BUT ensure that a clear text copy of the code is preserved when the development stops and the development stack is removed in favour of new projects. Copy the code to CD, to paper, to source repositories and store it safely; otherwise in 10 or 15 years people will be asking people like me to help them get their PL/SQL code back.
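One way to enforce this is a simple release-gate check: refuse a release unless every wrapped file has a matching clear-text copy archived. The sketch below assumes a convention of .plb for wrapped bodies and .sql for clear text; the file names and layout are invented for illustration.

```python
from pathlib import Path
import tempfile

# Release-gate sketch: every wrapped file (.plb - a common but assumed
# convention) must have a matching clear-text .sql archived alongside it.

def missing_clear_text(release_dir: Path) -> list[str]:
    wrapped = {p.stem for p in release_dir.glob("*.plb")}
    clear = {p.stem for p in release_dir.glob("*.sql")}
    return sorted(wrapped - clear)

with tempfile.TemporaryDirectory() as tmp:
    rel = Path(tmp)
    (rel / "billing_pkg.plb").write_text("-- wrapped body")
    (rel / "billing_pkg.sql").write_text("-- clear text source, archived")
    (rel / "audit_pkg.plb").write_text("-- wrapped body, no source kept!")
    print(missing_clear_text(rel))   # names the packages whose source is missing
```

A check like this, run before the development stack is decommissioned, is the difference between a routine archive lookup and an email to someone like me fifteen years later.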

#oracleace #23c #21c #19c #plsql #wrapping #obfuscation

Review an Oracle Database for Security Issues

I recently released part one of a three part post about securing data in an Oracle database. That post was titled "Securing Insecure Oracle Databases" and can be read by following the link.

As we discussed in the first post there are two tracks that we can follow to secure data in an Oracle database. The first is to review the database for security issues and then fix them and therefore secure the data in the database. The second is to use some software or blocking tactics to prevent an attack from getting to the database. For the second option we also really need to know the current security state of the database before we can design and implement rules and tools to prevent attacks. These could be Intrusion Detection Systems (IDS), Intrusion Prevention Systems (IPS), Database Activity Monitoring (DAM) solutions and more. In essence these are barriers to stop attacks. These products/tools on their own cannot prevent all attacks; some must be prevented by good security design in the first place. For instance, if you have granted SELECT, INSERT, UPDATE and DELETE to all users, and with admin rights, then an attacker does not even need SQL Injection if most staff have direct access to the database.

Before we can secure any database we need to first of all recognise that we are not securing Oracle (the database), we are securing data that is held - or not - in a database. That is the goal, to secure the data.

So before we can secure any database (data) we need to know the current state of the security so that we can assess what is right and what is wrong. We should do this by performing a detailed audit of a single relevant database. We do not need to audit every database at this point as the purpose is to get a clear picture of the current data security in a relevant database. That database should be production so that it is relevant. If you audit a copy or a test system, it is not actually used, so there is no evidence of use or abuse of that database.

The next stage is to review any existing security standards, guides or baselines that exist in the company; ideally there would be an Oracle or database standard, but that is unlikely as most companies do not have one already. BUT any existing overall security standard can feed into this process, particularly any access and use policy that stipulates rules around authentication, permissions etc.

We should not be starting to fix anything at this stage BUT we must know what budgets we have in terms of time and money. If we create the most fantastic security policy with 200 counter-measures in it, it is useless if we have only one day a week available for fixing for 6 weeks. It sounds like a contradiction BUT it is better to focus on what you can do based on your budget; it is realistic. If you create a 200 clause security standard and only have limited time and no one managing it, then the time will not be spent wisely. It is better to understand your budget and spend that time to improve the data security as much as possible for the budget that you have; and then next year or next project improve and improve again going forwards.

We must create a data security policy based on budget, time, what we found in the audit and existing standards. This can be a data security policy document but it should also be a baseline standard or even scripts to secure new databases. Put a stake in the ground: from this point forwards all new databases should be secured; in this way the security of data does not get worse. It is also easier to start building secure databases, as this can be part of the build process; this is easier than fixing existing databases.

Once you have a standard / policy for secure Oracle databases then you must go around and audit every database to test them against this standard.
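Testing databases against the standard can be largely mechanical once the standard is expressed as checkable values. The sketch below is illustrative only: the baseline entries and the captured settings are invented examples, and in practice the settings would be collected from each database with queries or scripts.

```python
# Sketch of auditing a database's captured settings against a baseline
# standard. Parameter names and expected values here are examples, not a
# recommended or complete baseline.

BASELINE = {
    "sec_case_sensitive_logon": "TRUE",
    "failed_login_attempts": "10",
    "remote_os_authent": "FALSE",
}

def audit_against_baseline(settings: dict) -> list[str]:
    findings = []
    for param, expected in BASELINE.items():
        actual = settings.get(param, "<not set>")
        if actual != expected:
            findings.append(f"{param}: expected {expected}, found {actual}")
    return findings

# Settings as captured from one (imaginary) production database.
prod_db = {
    "sec_case_sensitive_logon": "TRUE",
    "failed_login_attempts": "UNLIMITED",   # deviates from the standard
}

for finding in audit_against_baseline(prod_db):
    print(finding)
```

The value of this shape is repeatability: the same baseline can be run against hundreds of databases, and re-run after upgrades and patches that may have reset security back to insecure settings.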

We then must think about securing all of the existing databases. This also is not an easy task if you have hundreds or thousands of databases. It could be done manually but that would not be practical, especially as database security often needs to be re-applied after lock down. This is often due to upgrades, patches and maintenance of the applications and database that reset security back to insecure settings.

We must plan fixing and strategies carefully.

At a high level securing a database consists of these groups of tasks: 1 - patching, 2 - hardening, 3 - actual data security. The actual data security can be broken into a number of sub-categories as well: a) database access controls, b) user privileges, c) data access controls. Finally we can layer on context-based security; either home grown or products from Oracle such as Database Vault, TSDP, VPD, OLS etc.

Securing data is expensive in time and effort; this is why data security should be part of every database project and planned in from day one and continued until the database is de-commissioned.

In the next post we will discuss the third part of this series and cover monitoring.

#oracleace #23c #21c #19c #oracle #database #security #databreach #audit

Oracle security and ERP systems and ACE

First, before we get into the subject of the blog: I just received an email from the Oracle ACE program and I have been awarded Oracle ACE for another year, until the end of May 2024; hopefully more years after that will come also.
Pete Finnigan is ACE Pro for another year

Back to the brief topic of this blog.

I have audited many Oracle databases for customers over more than 20 years, and quite a lot of these databases support ERP type systems such as SAP, PeopleSoft, JD Edwards, Oracle E-Business Suite and so on. Quite often these are the primary product used to support the business. When I conduct a security audit of an Oracle database I do the same level of audit for all types of database use. I have found that this conflicts with the ideas of security in ERP/big systems. Over the years I have seen that security of EBS, JDE etc. is considered at the level of the application. Often security teams and external consultants are looking at fraud, conflicts of interest, ERP privilege models and ERP security settings BUT there is often a lack of security awareness at the Oracle database level.

I have spoken a couple of times on this subject at the UKOUG and also a security conference in Slovenia.

This is interesting and we must ask why?

There are probably a lot of reasons why the Oracle database security is ignored or not paid the same attention as the ERP level security in companies:

  • Companies simply do not consider the security of the Oracle database as they have been conditioned to think that security of a large business/ERP-type system is done at the level of the ERP

  • Some companies have limited Oracle database staff as the database is installed and provided via the ERP. I see this in JD Edwards installs for instance where the Oracle database underpinning it is ignored.

  • Companies treat the Oracle database (and would presumably treat SQL Server or similar in the same manner) as a sealed unit not to be touched or tampered with

  • Companies are frightened to change the settings of an Oracle database supporting an ERP. I have seen this with SAP and EBS databases; indeed SAP provide their own database screens and tools to isolate the database. They feel that if they change anything at all it could break the system for the business

  • Some companies feel the vendor is responsible for securing the database itself. In one sense I can see logic in this. If a database is provided as a sealed unit to support an ERP then the vendor should be able to lock it down exactly the same for all end customers, but my experience of seeing these databases is a lack of security

  • There is an underlying feeling that security of the Oracle database cannot be changed and should not be changed, as it is not the customer's right/job/responsibility to change it

Bear in mind I am not an ERP expert; my focus is Oracle database security, so my thoughts are based on what I have seen and been told at sites rather than on being an expert in any ERP.

If we don't consider the database there is a problem, as often these security settings, roles, menus, responsibilities, organisational separations and more are stored in the database.

There is of course a gap. If the security of an ERP is enforced at the ERP level, that security and its settings are inevitably stored in the Oracle database layer. An attacker or rogue employee who can access the database directly can change that security outside of ERP controls and can of course also access data. A good example is JDE, where there are more than 4000 tables in various schemas and all except two are granted INSERT, SELECT, DELETE and UPDATE to PUBLIC. Often database passwords also share a pattern. In EBS, for instance, Oracle provides tools to change all the database passwords in one fell swoop using pattern passwords, where a password may be long but the variable part is small. If the pattern is known then guessing the variable part and cracking all passwords is much easier. These are just a small number of examples.
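The pattern-password point is easy to demonstrate. The sketch below uses plain SHA-1 purely as a stand-in hash; it is not Oracle's password verifier, and the prefix shown is invented. The message is the search space: a 17-character password with a known 13-character pattern is really only a 4-digit password.

```python
import hashlib
from itertools import product

# Why pattern passwords are weak: if the fixed prefix is known, only the
# short variable part needs guessing. SHA-1 here is purely illustrative;
# it is NOT Oracle's actual password verifier.

PREFIX = "Welcome#2023_"          # the known, shared pattern (invented)
DIGITS = "0123456789"

def verifier(password: str) -> str:
    return hashlib.sha1(password.encode()).hexdigest()

target = verifier(PREFIX + "4711")   # the "long" password to crack

def crack(target_hash):
    # Only 10**4 guesses despite a 17-character password.
    for combo in product(DIGITS, repeat=4):
        guess = PREFIX + "".join(combo)
        if verifier(guess) == target_hash:
            return guess
    return None

print(crack(target))   # recovers the full password almost instantly
```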

We have talked about database security here BUT the same logic and ideas also apply to the operating system for the same reasons.

ERP systems tend to have audit trail mechanisms built into the ERP layers; for instance in JDE or EBS there are last updated by and last updated when type columns on tables, and triggers can also be used to generate before and after audit trails. If we want to know what goes on in the database directly, avoiding ERP security, then we must use database level audit techniques. We can then audit access to the security of both the database and the ERP, and also audit access to the ERP settings and configuration.

The message here is that we must not ignore the security of the Oracle database in an ERP system, as it is often too easy for a direct database user to change security or access data outside of the rules enforced in the ERP.

#oracleace #23c #19c #21c #erp #oracle #database #security #jde #jdedwards #ebusinesssuite

A Thought Experiment - Application in the Root Container?

I had a call from a long time customer yesterday whilst I was travelling. They are nowhere near going to cloud yet, even for some test/dev type databases. They have quite a lot of databases; around 80% fall into 4 or 5 categories/types of application and the rest are random one-off databases for various purposes.

They have a small number of 9iR2 and 11.2 databases, and most are 12.2, going to 19c or already on 19c.

All of the existing databases are non-CDB and they are moving upgraded databases to non-CDB in 19c as well. Oracle deprecated the non-CDB architecture in 12c and it is de-supported in 21c, and I think I read that it is not possible in 23c to create a non-CDB database.

So, for now they are supported on 19c and have their non-CDB databases that look and feel like older databases. I have spoken to them in the past about the multitenant architecture and said that a lot of our customers are converting to a single PDB in a CDB. This customer does not like this as in effect (their words) they have to maintain two databases for every single previous database. They could of course use multitenant properly and have more than one database per container. I have suggested this and they still think that yes, they could have 3 PDBs in a CDB, but that is still 1.3 databases per original database!

So, they asked the question: can we install our application in the CDB and run it from there and in effect have a single database? This was not a new idea for me, as I thought about it maybe ten years ago when multitenant first came out and I knew that Oracle would make it the default architecture at some point. I considered the idea of putting the application only in the root container as an option many years ago and, whilst it is a great thought experiment, unless Oracle supports this there are likely to be problems cropping up over time that need to be solved. I have had this blog in my list of potential blogs for many years BUT as I was asked this yesterday it is time to write it now!!

So, as a thought experiment, can we simply put the application in the root container, remove the pluggable databases and treat the CDB/root as a single database? Probably.

There are issues of course, but as a thought experiment I am not hindered by production and support. The first things that come to mind are that we as customers of the Oracle database are allowed to access the CDB, have COMMON rights and make settings, profiles, roles etc., so we are intended by Oracle to be there. Can we create code (PL/SQL), tables, views etc. in the root container? Yes, of course. Can we put a whole application there? Well, yes, but there could be issues with naming, upgrades etc.

A DBA could for instance legitimately create tables and views in the CDB and some PL/SQL to maintain the PDBs and the CDB, and that to me is fine. Maybe the DBA has some tools, utilities, some stats etc. that he collected in each CDB; that is fine. BUT can we put a whole home grown application or legacy application in the root container? We probably can, BUT there will probably be a lot more problems. Oracle put APEX in the root container originally in 12c, and application containers provide the idea that you install the application once in the application container, making support faster and easier, and then create a PDB in the application container for each end user/customer database, maybe with customer data in the PDB and core data and code in the application container. So the principle of putting the application in the root container or an application container exists.

If we try to install a legacy application into the CDB, the first problem is the naming of users and roles, which by default must start with C##; so the application code and all interfaces and database schemas may need to change to add C##. If this is a third party vendor application then the vendor is not likely to help or support you.

We could use the undocumented parameter "_oracle_script=true" and install the application; this would allow use of non-C## names in the database, BUT every user, role, profile, table or.... that you create, as well as code, is likely to have the ORACLE_MAINTAINED column set to YES. This can be a problem for the seed and for upgrades, particularly because of the ORACLE_MAINTAINED flag.

What about the common_user_prefix parameter:

[oracle@oel19cee ~]$ sqlplus / as sysdba

SQL*Plus: Release - Production on Tue Jul 11 10:14:00 2023

Copyright (c) 1982, 2019, Oracle. All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release - Production

SQL> select sys_context('userenv','con_name') from dual;

SYS_CONTEXT('USERENV','CON_NAME')
--------------------------------------------------------------------------------
CDB$ROOT

SQL> sho parameter common

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
common_user_prefix                   string      C##
unified_audit_common_systemlog       string

What if we set this to a NULL string?

SQL> alter system set common_user_prefix='' scope=spfile;

System altered.


Re-start the database:

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 830470160 bytes
Fixed Size 9140240 bytes
Variable Size 738197504 bytes
Database Buffers 75497472 bytes
Redo Buffers 7634944 bytes
Database mounted.
Database opened.

Check the parameter in the root container:

C:\Users\Pete>sqlplus sys/oracle1@// as sysdba

SQL*Plus: Release - Production on Tue Jul 11 10:37:03 2023

Copyright (c) 1982, 2021, Oracle. All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release - Production

SQL> sho parameter common

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
common_user_prefix                   string
unified_audit_common_systemlog       string

Now create a user and role and table and see:

SQL> create user testtest identified by testtest;

User created.

SQL> create role testrole;

Role created.

SQL> grant create session to testtest;

Grant succeeded.

SQL> grant create table to testtest;

Grant succeeded.

SQL> grant unlimited tablespace to testtest;

Grant succeeded.

SQL> connect testtest/testtest@//

So we have a user TESTTEST that does not include C## and we have connected to the CDB and now create a table:

SQL> create table test (col1 number);

Table created.


So, lets confirm TESTTEST and TESTROLE are COMMON and that the table TEST is not ORACLE_MAINTAINED:

SQL> col username for a30
SQL> col common for a3
SQL> col oracle_maintained for a1
SQL> set lines 220
SQL> select username, common, oracle_maintained from dba_users where username like 'TEST%';

USERNAME                       COM O
------------------------------ --- -
TESTTEST                       YES N


And the table:

SQL> col owner for a30
SQL> col object_name for a30
SQL> col oracle_maintained for a1
SQL> set lines 220
SQL> select owner,object_name,oracle_maintained from dba_objects where object_name='TEST';

OWNER                          OBJECT_NAME                    O
------------------------------ ------------------------------ -
TESTTEST                       TEST                           N

So, we can make objects, users and roles with non-C## names and without making them Oracle maintained. So it is possible. This is a simplistic view, though, and there could be other issues in putting an application only in the CDB/root.

All privileges are then COMMON, but that would not really matter if only the CDB is used, as there would be no PDBs in which to use those rights.

Some things have to be done in the CDB anyway, such as LogMiner, so in my mind it is probably acceptable to use the CDB, BUT complete testing would be needed and other issues resolved; and the question of support from vendors or Oracle means that we cannot legitimately suggest this as an option. It is, though, a nice thought experiment that I think might work for people who still want just one database in a multitenant architecture.

NOTE: This is a thought experiment and I am not advising anyone to do this in production.

Securing Insecure Oracle Databases

Protecting legacy applications and legacy Oracle databases is hard because if the application was not written in house, or is third party custom written, or is a COTS package, then the database design was not done by you and is not controlled by you. Legacy here is in the sense that it is a packaged database designed years ago, and it looks like it was designed years ago, but it might be running in a 19c database now or even a 21c database.

When you (or anyone) design a database and application you design the code and the tables and views and data models and everything. Guess what? You are also supposed to design the hardening and security of the data in that application and database, but inevitably this is never done as there is always a focus on functionality, performance and service levels.

Applications that I see that use an Oracle database also often treat the database as a simple box that they do not care about; to the customer it is just a blind data store, and they (the customer/owner) feel it is someone else's security problem, or they assume the designer made it secure without actually checking themselves whether the data is indeed secure.

A lot of applications using a database clearly do not take data security as seriously as I do. If a database and application already exists then there is usually no appetite to change the design or database or harden it; usually because of fears that the functionality will be broken or the performance degraded by adding security to the data itself.

This is made worse by a lack of security knowledge around Oracle databases or databases in general. There is also usually no business driver to spend money to secure an Oracle database unlike performance issues where there is a clear requirement to make the applications function properly to maintain the business.

We can summarise a number of problems to solve in our goals to drive towards secure data

  • Legacy and commercial databases don't usually exhibit much security at the data level

  • Lack of appetite to fix the security of the database that exists already for fear of breaking the database or applications or creating performance issues

  • Lack of specific Oracle data security knowledge in security teams or DBA teams

  • Vendors won't spend money as they can't get more money from their existing customers to fix their own products

  • Internally often there is no budget or money to drastically improve security of data as there is no business driver. An actual breach will change that of course but in advance of a breach; no chance!

How do we solve the problem of fixing legacy (existing) database security?

This is a major issue and at a simple level there are two options:

  1. Detection and Prevention: Locate all the security issues in the database and fix them so that the database and application are protecting the data that they process and store

  2. Detection and Prevention: Detect attacks as they approach the database - maybe from SQL Injection sourced in the application by an attacker or a SQL*Plus user connecting to multiple accounts or a support person making changes to the database in production, or??

As you can see both are really classed as detection and prevention. On one side we can review a database and find out the set of security issues and develop a cost effective plan to fix those issues and prevent an attack because the gaps are closed. On the other side we can detect threats as they happen in a database that has not been fixed and secured and report these and potentially also prevent them. In this case it would be via an intrusion detection and prevention system.

It is interesting that both angles achieve really the same thing but work in completely different ways. In one case we spend a little money to understand the problems (review and vulnerability scanning) and a lot of money fixing the issues to prevent attacks; in the other we can spend a lot of money to purchase IDS/IPS/Database Firewall software and then configure and set it up to track database activity. We also need to buy licenses, and this ongoing cost is also quite high. As we know, Oracle has included a SQL Firewall in the database in 23c. My three part series of papers on the SQL Firewall is available.

Which is the best approach?

Fixing has low upfront cost and complexity and high back end costs. Blocking traffic and actions has medium up front cost and complexity and medium to high back end cost.

We can do both solutions of course!

In both cases, neither is a one-off process that is done once. We need to revisit hardening and we also need to revisit threat detection. New knowledge, new versions of applications and databases and other changes occur, so we must constantly update our data security.

There are products and features available from third parties and from Oracle themselves to help protect data in a database. Oracle for instance focuses on add-ons such as Database Vault, VPD, encryption, masking and more; third parties focus on providing products to help analyse and secure data in databases. Some third parties provide network scanning and IDS/IPS/Firewall technologies.

What about the cloud?

A lot of companies are moving data and their Oracle databases to the cloud. Whilst this makes sense from a budget perspective, and the cloud infrastructure is probably better in security terms than some on site data centres, the fact that a database is badly designed and insecure on-premise doesn't magically change when that database is moved to the cloud. A database, no matter where it is, must be secured at the database level in terms of securing the data that is held and processed.

The ideal goal in terms of securing existing databases, and particularly those going to the cloud, is to just click a button and it is then secure, or to select a service with no customer input to its setup except to click deploy/install and we are protected.

We are not there yet, and these ideal goals are hard to achieve with a click of a button, but we can analyse and secure databases and we can detect attacks and do the work to secure databases piecemeal. Anyone running an Oracle database can do this. The key is to create a plan and understand what your budget is, what you want to achieve and what you can achieve with your budget.

This is planned to be part one of three articles. The next parts, coming soon, are:

  • Review an Oracle Database for Security Issues and establish what needs to be done to secure your data with a discussion on how you can harden your databases

  • Detecting Attacks Against an Oracle Database

#oracleace #23c #21c #19c #cloud #oracle #security #lockdown #databreach

Oracle Unified Audit Target Data

I had an email from an Oracle colleague a few days ago asking me a question about Oracle Unified Audit in 19c. He wanted to be able to track when someone with the DBA role accesses objects in a particular schema and no one else, and also when someone accesses the same schema objects with SYSDBA, SYSRAC, SYSOPER etc. and no one else.

He wanted to know if there is any trick in Unified Audit to set object audit so that instead of "select on orablog.table" we could do "select on schema.*" or similar for all the privileges he wants to track on the objects, such as insert, update, delete, select and execute.

With this type of requirement we can quickly see that the unified audit "when clause" and "...with granted roles..." are not as flexible as we would like them to be. For instance it would be great to be able to use "schema.*" as a rule, and it would also be great to access the fields of the current audit record being written (for all records of course) so that we could say in the when clause something like "...current_object_owner = <the owner we want>...". This would be very flexible if we could do it.

In terms of the requirement we have the following points:

  • There is no audit option for schema.* - if there were such an option it would then be a simple case of "with granted roles DBA..."

  • We could audit “all” for ISDBA and post filter the audit trail to see only those actions relevant to the schema that we are interested in

  • We could audit “all” for “users with granted roles” DBA and post filter the audit trail to see only those actions relevant to the schema we are interested in

  • From the other side of thinking, we could instead audit all object actions, such as “actions select” on the objects, and post filter to see if the access was by a DBA or SYSDBA etc. We would also need to enable this for DBA and also SYSDBA etc., and we would need to individually enable it for every object needed and the privileges on those objects

  • The current “when clause” is extremely limited in its possibilities. If we audit all actions; “action select” for instance then it would be nice to access the current audit record being created in the “when clause” and then check if the schema of the object is in the list to limit the output BUT this is not possible

  • The sys_context('userenv','current_sql') context would be useful to get the SQL and parse out the schema (**if it is listed in the SQL, which it may not be even if we get the SQL!**) but this is not available from unified audit, only from FGA
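The "audit broadly, post filter" options above can be sketched as follows. The field names loosely follow UNIFIED_AUDIT_TRAIL columns, but the records, the DBA-role membership list and the filter rule are all invented for illustration; real post-filtering would run against the exported audit trail and dba_role_privs.

```python
# Sketch of post-filtering a broad audit trail down to "privileged access
# to one schema". Rows and role memberships are invented; field names are
# modelled loosely on UNIFIED_AUDIT_TRAIL columns.

TARGET_SCHEMA = "ORABLOG"
DBA_USERS = {"ADMIN1"}          # users with the DBA role, e.g. from dba_role_privs

audit_rows = [
    {"dbusername": "ADMIN1", "object_schema": "ORABLOG",
     "action_name": "SELECT", "sysdba": False},
    {"dbusername": "SYS", "object_schema": "ORABLOG",
     "action_name": "UPDATE", "sysdba": True},
    {"dbusername": "APPUSER", "object_schema": "ORABLOG",
     "action_name": "SELECT", "sysdba": False},   # dropped: not privileged
    {"dbusername": "ADMIN1", "object_schema": "HR",
     "action_name": "DELETE", "sysdba": False},   # dropped: wrong schema
]

def relevant(row: dict) -> bool:
    privileged = row["sysdba"] or row["dbusername"] in DBA_USERS
    return privileged and row["object_schema"] == TARGET_SCHEMA

hits = [r for r in audit_rows if relevant(r)]
for r in hits:
    print(r["dbusername"], r["action_name"], r["object_schema"])
```

The trade-off is clear from the sketch: the audit policy itself stays broad and simple, but the precision lives in the reporting layer, which must be maintained alongside the role memberships it depends on.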

So, as with all things audit trail over the years, whether using Oracle Unified Audit, Oracle standard audit or a commercial third party solution, we must satisfy our own requirements and the business's auditing needs, especially the security department's requirements. We must not simply start with a list of shipped policies from someone else, including Oracle's built in policies, or simple lists or the CIS recommendations. These are not designed for your circumstances; you must design your audit events at a high level (in words) first and then work out how to implement them in whatever technical solution you decide on.

The best solution using Unified Audit for this example would be to script the action list for all actions on all objects in the schema we want and combine that with “users with granted roles” to limit it to DBA; then create a second policy with the same object list and use a when clause to catch SYSDBA, SYSOPER etc. - i.e. sys_context('userenv','isdba') - or simply audit everything done as a system user such as SYSDBA or SYSDG etc.

Audit trail design like everything in Oracle security ends up being a compromise.

#oracleace #dbsec #oracle #security #audit #events #databreach