
Pete Finnigan's Oracle Security Weblog

This is the weblog for Pete Finnigan. Pete works in the area of Oracle security and he specialises in auditing Oracle databases for security issues. This weblog is aimed squarely at those interested in the security of their Oracle databases.


Oracle Security Search Is Annoying and protecting PL/SQL code

This post is not specifically about Oracle security, but I got here because of Oracle security, so I am going to talk about Oracle security first... :-)

I am working this morning on proof of concept code for a security solution for a client's database; I am creating code for a high level design I wrote a couple of months ago, to demonstrate that the custom Oracle security feature will work in practice for them and that it is protected from bypass. I might talk about the actual solution here generally later if the client is happy for me to do so, or simply discuss some of the protection features I use, as they are my IPR.

The solution is a security feature I am designing for their database, also with secure coding in mind. I am implementing a feature in PL/SQL that has some protections built in to stop the feature being bypassed; it has PL/SQL software license type features added, not because the client wants software license features, but to prevent someone from selecting the code from the database, adding it to their own database to run it, playing with it and trying to break it. The license features also try to make sure the code runs in the right context in the installed database. This is an area (secure code, PL/SQL IPR protection, PL/SQL software license features, context based security...) that I am really interested in at the moment and that I have done quite a bit of work on for clients in the last couple of years; hence I will talk about all of these things at the UKOUG SIG on October 10th and also on December 5th at the UKOUG conference.

So these protections and context based checks stop the code from being broken or reverse engineered, which would allow a hacker to understand how the code and protections work; they also try to make sure the code only runs when it is supposed to, and that it won't run if installed in another database. Of course no protection will work forever when it is protecting code in a database that someone could take and run, or simply study somewhere else privately until they break it. The idea is to make it very hard, and to make it take so long, that they will give up.
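As an illustration only (this is not the client's actual solution; the procedure name, database name and host below are hypothetical placeholders), a context based check in PL/SQL might compare the runtime environment against expected licensed values before the protected feature is allowed to run:

```sql
-- Hypothetical sketch: refuse to run if the code has been moved to a
-- database it was not licensed for. In a real deployment the expected
-- values would be hidden and obfuscated, not plain string constants.
CREATE OR REPLACE PROCEDURE check_context AS
  l_db_name VARCHAR2(30);
  l_host    VARCHAR2(64);
BEGIN
  l_db_name := SYS_CONTEXT('USERENV', 'DB_NAME');
  l_host    := SYS_CONTEXT('USERENV', 'SERVER_HOST');
  -- 'PRODDB' and 'dbhost01' are placeholder licensed values
  IF l_db_name <> 'PRODDB' OR l_host <> 'dbhost01' THEN
    RAISE_APPLICATION_ERROR(-20001, 'Context check failed');
  END IF;
END;
/
```

A check like this on its own is trivial to find and remove from readable source, which is exactly why the layers described below matter: the check only has value once the code that performs it is obfuscated and wrapped.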

I have four layers of protection on top of the original code:

layer 0) PL/SQL code - just normal code you and I write
layer 1) Add in license features and context based protection
layer 2) Obfuscate the code with our PL/SQL obfuscator PFCLOBfuscate
layer 3) Wrap the obfuscated code with the 9i wrap
layer 4) Protect the wrapped code with WrapProtect, our tool that stops unwrappers from, well, unwrapping the code

The obfuscation with PFCLOBfuscate makes the code hard to read and removes meaning, but it also means that when the 9i wrap is used the symbol table no longer gives away secrets in the underlying PL/SQL code. It is then not possible to simply modify the wrapped file directly to change a setting or check, as that would also change functionality elsewhere and stop the code from running. Using the 9i wrap is also better because getting a working 9i unwrapper is harder and, more importantly, all the work I have done over the years understanding the PL/SQL wrap mechanism and unwrapping PL/SQL now becomes useful, as I have worked out hundreds of ways to prevent all known 9i and earlier unwrappers from working. When these hundreds of ways are also randomised there are literally thousands of protections. The 9i wrap and the WrapProtect program don't have to be used of course, but they add that final layer of protection that makes it very hard for most people to steal your PL/SQL based IPR, or even to try and run that code elsewhere or in a different context.
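To illustrate the symbol table point (the identifiers and the renamed output here are made up, not PFCLOBfuscate's actual output): meaningful names survive Oracle's wrap process, so without an obfuscation pass first, a wrapped file still advertises what the code does.

```sql
-- Hypothetical example: this identifier survives Oracle's wrap and is
-- visible in the wrapped .plb file, leaking the purpose of the code.
CREATE OR REPLACE FUNCTION check_license_expiry RETURN BOOLEAN AS
BEGIN
  RETURN SYSDATE < DATE '2013-01-01';  -- placeholder expiry date
END;
/

-- After a renaming obfuscation pass (made-up renaming), wrapping the
-- same logic exposes nothing useful to someone reading the file.
CREATE OR REPLACE FUNCTION f_a17 RETURN BOOLEAN AS
BEGIN
  RETURN SYSDATE < DATE '2013-01-01';
END;
/
```

This is why the obfuscation layer comes before the wrap layer in the list above: each layer removes information that would otherwise leak through the one applied after it.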

For a hacker to break the code they must find an unwrapper, defeat the unwrap protection, defeat the obfuscations and then defeat the license and context features.

OK, the reason I started this post was that during my work this morning I wanted to search Google for something related to this work. I did a search and I have noticed a really annoying feature more and more recently: the results are completely flooded by single domains. Whilst the actual pages on those domains may have some relevant data, I can tell from just the snippets visible that they are not relevant for me, and I don't want pages and pages of results for one domain. In my search the first five results were various pages on a single domain, the sixth result was my site, the seventh was actually not relevant at all for the search; then three results for old books, then 27 results for one domain (yes, 27 results for one domain; Google never used to do this!!!) and then one for me again.

What use is this? Over 30 results for one domain in the first four pages of Google results. I did this search in Firefox; in IE, I get the first 4 results for that domain, then two pages for my site, then the same pages as before in a different order, and then pages more from that same domain. So not only are the results flooded by single domains, they are different between Firefox and IE. Why?

I then ran the same search in another engine and the results were much more balanced. I did the same check in a third engine, a great little search engine that gives the clean, simple feel that Google did many years ago; I really like it.

Come on Google, give us back that great search and remove all the gimmicks like auto-complete, which also annoys me. The current Google algorithm must favour large sites with lots of back links, otherwise how do we get pages and pages of results from strong sites instead of varied results from lots of sites? OK, the domain that dominated my search has, according to Google, 31 million pages in its index; that is a huge number of pages that carry a massive collective weight, and some are clearly relevant, but I want balanced search results to find what I need, not just pages of results from one site.

OK, Google aficionados are going to tell me to log in and tailor the search to exclude the domains that I don't want to see, or to do other trickery, but I really don't want to log in to search. I just want the plain vanilla results everyone else gets. I don't see why I need to log in to get better results; Bing, DuckDuckGo and other search engines don't need me to do so to give balanced results.

Also I think that this has not passed others by, as when I check search results in Google Webmaster Tools I see a big difference from not that long ago. I used to see people clicking through on terms where I ranked on the first page of results; I now see click-throughs for results where my site is pages and pages down. This means, in my opinion, that people are probably looking deeper to get what they want instead of clicking the first page only, as was previously the norm.

I started to use other engines a while ago, but still by habit use Google; it annoys me for some results though, so I am tending towards the others instead, even though I have used Google since the 90s.