What could we use AI for in an Oracle security sense? The obvious choices that stand out are using AI to detect wrongdoing in firewall logs or audit trails, or using AI to detect setup anomalies in configuration. Assuming the standard generative AI model does not have enough knowledge of these topics, we could create our own model and teach it these things, or we could use RAG to supply the right knowledge (usually specialist papers, manuals, etc.), BUT such material does not currently exist in any quantity. We would need a manual that describes every type of attack and then also feed the audit trails or firewall logs to this augmented AI model.
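To make the RAG idea concrete, here is a minimal sketch of the retrieval half of such a pipeline: given an audit-trail entry, pick the most relevant "attack description" notes and build a prompt for a generative model. The attack notes, the scoring method (simple token overlap rather than embeddings) and the audit entry are all hypothetical illustrations, not a real product.

```python
# Minimal RAG-style sketch: retrieve attack notes relevant to an audit entry
# and assemble a prompt. All documents and names below are illustrative only.
from collections import Counter

ATTACK_NOTES = {
    "privilege_escalation": "GRANT DBA or ALTER USER statements issued from non-admin accounts",
    "brute_force": "Repeated failed logon attempts with ORA-01017 against the same account",
    "data_exfiltration": "Large SELECT volumes or UTL_HTTP calls sending data to external hosts",
}

def score(query: str, doc: str) -> int:
    """Crude relevance score: count of shared lowercase tokens."""
    q = Counter(query.lower().split())
    d = Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(audit_line: str, k: int = 1) -> list:
    """Return the names of the k best-matching attack notes."""
    ranked = sorted(ATTACK_NOTES,
                    key=lambda name: score(audit_line, ATTACK_NOTES[name]),
                    reverse=True)
    return ranked[:k]

def build_prompt(audit_line: str) -> str:
    """Glue the retrieved context and the audit entry into one prompt."""
    context = "\n".join(ATTACK_NOTES[name] for name in retrieve(audit_line))
    return f"Context:\n{context}\n\nAudit entry:\n{audit_line}\n\nIs this suspicious?"

entry = "user SCOTT failed logon ORA-01017 repeated 40 times in 2 minutes"
print(retrieve(entry))  # the brute_force note shares the most tokens
```

In a real system the token-overlap scorer would be replaced by vector embeddings and the prompt sent to a model, but the shape of the pipeline (retrieve, then generate) is the same.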
So, yes, it is possible.
The current AI that has burst onto the scene in the last few years from the likes of OpenAI and DeepSeek has happened because of two major factors:
- The rise in availability of hardware to implement the models - graphics cards and large amounts of RAM
- The rise in the large amounts of data freely available
The hardware was helped along by games using graphics cards for matrix and vector calculations, and by uses such as bitcoin mining and password cracking. The rise in data is because of the colossal growth of the internet: books being digitised and many more sources of knowledge that are now digital and freely available.

Many years ago, back in 1991, I bought the above book about neural networks and also another C/C++ book that implemented neural networks and TurboVision (the text-based UI for DOS back in the Borland 3.1 development days). The book above includes a chapter on WIZARD, an early attempt to implement neural nets in RAM. Around the same time, from 1992-1994, I also got into Fuzzy Logic and Genetic Algorithms.
For one assignment in one class of my degree I designed a system to control car wipers based on rainfall; not the simple settings 1, 2, 3 and 4 of early fixed-speed wipers. I designed it with a water/rain detector that used fuzzy logic to decide how fast to tell the wipers to go, or whether to run them at all. It was implemented in MATLAB only, not physically, but it worked in the testing of the software.
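A controller of that kind can be sketched in a few lines. This is a Python illustration rather than the original MATLAB work, and the membership functions and speed values are assumptions for the example, not the assignment's actual numbers.

```python
# Sketch of a fuzzy wiper controller: three rules map a rain-sensor
# reading (0..10) to a wiper speed (0..3) via a Sugeno-style weighted average.
# All thresholds and speeds are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def wiper_speed(rain):
    """Fuzzy rules: no rain -> off, light rain -> slow, heavy rain -> fast."""
    rules = [
        (tri(rain, -1, 0, 3), 0.0),   # no rain    -> wipers off
        (tri(rain, 1, 4, 7), 1.5),    # light rain -> intermittent/slow
        (tri(rain, 5, 10, 15), 3.0),  # heavy rain -> fast
    ]
    total = sum(weight for weight, _ in rules)
    if total == 0:
        return 0.0
    # Defuzzify: weighted average of each rule's output speed
    return sum(weight * speed for weight, speed in rules) / total

print(wiper_speed(0.0))  # dry screen: 0.0, wipers off
print(wiper_speed(9.0))  # heavy rain: 3.0, full speed
```

Because neighbouring membership functions overlap, readings between the peaks blend two rules and give a smooth, continuous speed rather than the stepped 1-2-3-4 of fixed-speed wipers.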
How did we get to the sudden growth of AI now, with the generative models and reasoning models available today? The golden circle of the right hardware and data being available. If you look online it states that ChatGPT was trained on very large data sets, including web content, conversations and more, and that this was paired with supervised learning, reinforced with examples that include the right answers.
The fact that these models most likely use very large data sets implies that the internet was spidered and web pages parsed and knowledge extracted. Makes sense.
Generative AI in the sense of directions, recipes and general knowledge, as used by the general person, is fine, but if you play with these interfaces and ask very specific questions not supplemented by RAG data then the answers are less accurate or simply wrong.
There is also a second problem that we have all seen: the rise of AI-generated things. Just as examples:
1) Today I saw a picture that looked like ancient South American carvings except the person depicted looked like a spaceman. I have seen genuine cases that could loosely be interpreted this way, BUT this example was sitting there firing a machine gun. Fake!
2) A picture today showed ancient architecture and more modern buildings, BUT the people were the wrong scale for the doors.
3) Yesterday I saw a picture of a prototype diesel locomotive in Doncaster works, BUT the text stated that the name plate was missing, and careful viewing showed a ghost steam engine partly drawn behind it.
All these are fakes generated by AI.
Then we have the get-rich-quick market, web content and social media generation, and more. I have seen lots of people touting how to create images, text, posts and more using ChatGPT.
We do not know the accuracy of this fake data. The internet and its corpus of data is growing and being filled with AI-generated content. If the models learn or train from the internet, and the internet gets corrupted with generated and fake data from AI, then the training and learning is also compromised.
This is a big problem going forwards. Yes, generative AI is great, but if it is polluted can we trust it?
I think that AI will only get bigger, and I can see it used in Oracle security with the right data and inputs to learn from. How will it perform against audit trails or firewall logs being generated in large quantities and very fast? Can AI read the data fast enough and act on it?
#oracleace #sym_42 #oracle #database #security #ai #generative #rag