We have been lucky throughout the pandemic to still get work, and we are also focused on updating our training classes and our software products.
We are working on a major update to PFCLScan, our database security scanner. We are also working on updates to PFCLCode, our product that analyses your PL/SQL for security coding issues, and on improvements to PFCLObfuscate, our product that protects PL/SQL.
Most importantly, we are now close to releasing our new product, PFCLForensics - more details on this soon. We are preparing the text and images for its page on the website, finishing testing and adding some new features. At a high level the product has three main modes:
- Incident management. The product helps manage the response to a breach of an Oracle database via a built-in plan/checklist that the incident responder can work through to ensure all the necessary steps are followed
- Live response. We have a number of built-in policies that allow a user to create a project and then execute the plugins to pull live response data from the database or Unix/Linux server. We pull the most transient data first and in the right order. The product also supports loading external files into a project to include in the analysis. All of the evidence gathered for each project is checksummed, and the checksums are validated every time the project opens, or on demand, to ensure that the evidence has not changed
- Forensic Analysis. We provide many tools and features to aid forensic analysis. The user can browse the live response and static response data, choose potential evidence and add it to a timeline. The user can add comments to any (or none) of the lines of data in the timeline. The data is automatically correlated and is also viewable in a drill-down graph and an absolute timeline graph, so the evidence can be seen visually. Supporting evidence that is not necessarily part of the timeline of artefacts can be added to a separate "supporting evidence" timeline. The product also includes a word processor and a template for a report; data can be added to the report as flat data or as built-in screenshots.
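The idea of checksumming evidence and re-validating it on every project open can be sketched in a few lines. This is a minimal illustration in Python, not PFCLForensics' actual scheme: the choice of SHA-256, the flat evidence directory and the function names are all my assumptions.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large evidence files don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def record_checksums(evidence_dir: Path) -> dict[str, str]:
    """Checksum every evidence file once, when it is first collected."""
    return {p.name: sha256_of(p)
            for p in sorted(evidence_dir.iterdir()) if p.is_file()}

def verify_checksums(evidence_dir: Path, recorded: dict[str, str]) -> list[str]:
    """Return the names of any evidence files whose contents have changed."""
    return [name for name, digest in recorded.items()
            if sha256_of(evidence_dir / name) != digest]
```

Verification would run at project open and on demand; a non-empty result means the evidence can no longer be trusted as collected.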
There are many more features; I will show some of these here very soon, along with more of how the product works.
One area we have looked into is the fact that some DDL does not include a timestamp for when it happened (separate blog coming on this). For instance, suppose I issue the command "grant delete on orablog.credit_card to xx". The grant is stored in sys.objauth$, but there is no timestamp on this table to tell us when the grant was issued. We can get some (not reliable) idea from sys.obj$.mtime and use this in conjunction with the create date and the interface change date. But this is not reliable, as MTIME also changes if, for instance, the object is recompiled. Even worse, each object has only one MTIME, so it records the last change, not every change. The answer in forensics is to add comprehensive audit trails to the database before a breach so that you have the evidence to use in a breach analysis. Most people don't have this audit trail BUT we can help with this; see PFCLATK, a comprehensive audit trail that can be added to a database in minutes as a combination of our PL/SQL toolkit and consulting. We are working on PFCLATK and it will be added into PFCLScan as a separate product later this year to allow an interactive dashboard and point-and-click administration of policy-driven audit trails.
So, if you don't have an audit trail, what's left to answer when the grant occurred? Redo is really the only answer. We should not dump redo to trace as this would affect the server during a forensic response, BUT we can view the redo logs or archive logs as binary files and see the DDL. A simple strings command is not good enough, as we don't get context, but a complete redo block analyser is also not necessary. We have a block dumper included in PFCLForensics and we can dump blocks 0 and 1:
C:\backups\30_06_2020_3_9_14_1350\scanner\oscan\Release>bd -v -c bd.conf -x -b2 -i redo02.log -o redo.op
BD: Release 3.9.562.1453 - Alpha on Thu Jun 24 13:40:45 2021
Copyright (c) 2021 PeteFinnigan.com Limited. All rights reserved.
[2021 Jun 24 12:40:45] bd: Starting BD...
[2021 Jun 24 12:40:45] bd: Opening Output File [redo.op]
[2021 Jun 24 12:40:45] bd: Analysing BLOCK Input File [ redo02.log ]
[2021 Jun 24 12:40:45] bd: Process Hex dump
[2021 Jun 24 12:40:45] bd: Closing Output File [redo.op]
[2021 Jun 24 12:40:45] bd: Closing Down BD
C:\backups\30_06_2020_3_9_14_1350\scanner\oscan\Release>type redo.op
0x00000000: 00 22 00 00 00 00 c0 ff 00 00 00 00 00 00 00 00 ."..............
0x00000010: 67 c8 00 00 00 02 00 00 00 90 01 00 7d 7c 7b 7a g...........}|{z
0x00000020: a0 81 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0x00000030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
In this post I am interested in the values 7d 7c 7b 7a. David Litchfield, in his paper "Oracle Forensics Part 1: Dissecting the Redo Logs", calls this value a "magic" number that Oracle uses to determine that this is indeed a valid redo log. Oracle can run on little-endian systems, such as Linux on Intel, or on big-endian systems. The block size is also visible here as "00 02", which read as little-endian is 0x0200, or 512 bytes. A reader of block 0 of the redo log can therefore get the block size and the number of blocks, and can also use 7d 7c 7b 7a to determine the endianness of the file: if the magic is stored in the order shown here, the file is little-endian; if it is stored in reverse, then it is big-endian.
Whether Oracle intended this or not, we can use the first few bytes of block 0 of a redo log to decide how to process it.
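As a rough sketch of that idea, the probe below reads block 0 and uses the magic to pick a byte order before decoding the other fields. The offsets (0x14 block size, 0x18 block count, 0x1c magic) are inferred from the hex dump above and Litchfield's paper; they are observations on this file, not a documented on-disk format, and this is not PFCLForensics' implementation.

```python
import struct

REDO_MAGIC = 0x7A7B7C7D  # stored on disk as 7d 7c 7b 7a on a little-endian system

def probe_redo_header(path: str) -> dict:
    """Read block 0 of a redo log and recover endianness, block size and block count.

    Field offsets are assumptions taken from the hex dump in this post:
    0x14 = block size, 0x18 = number of blocks, 0x1c = magic number.
    """
    with open(path, "rb") as f:
        hdr = f.read(32)
    if len(hdr) < 32:
        raise ValueError("file too short to be a redo log")
    magic = hdr[28:32]
    if struct.unpack("<I", magic)[0] == REDO_MAGIC:
        endian = "<"   # little-endian file (e.g. Linux on Intel)
    elif struct.unpack(">I", magic)[0] == REDO_MAGIC:
        endian = ">"   # big-endian file
    else:
        raise ValueError("magic mismatch - not a recognisable redo log")
    block_size, block_count = struct.unpack(endian + "II", hdr[20:28])
    return {"endian": endian, "block_size": block_size, "block_count": block_count}
```

On the dump shown above this would report a 512-byte block size and 0x00019000 (102400) blocks, i.e. a 50MB redo log, and a little-endian file.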
More blogs soon, I promise!