We are continuing our overview of the Meduza Artificial Intelligence Solution we are developing. We are not disclosing any technical details at this stage; instead, we are highlighting some of the interesting real-life problems our invention, Meduza AI, will solve.
The previous post on Meduza AI is available here.
Current Development Stage
Before 20 November 2016, we completed a basic feasibility study and documented the commercial project concept, with a detailed execution plan based on its findings.
Currently, we are working on the commercial feasibility study that will progress into full-scale commercial platform development.
We are very confident in our Meduza AI system and have a clear development path with a detailed execution plan. We have used our solution in three separate applications to the Business Research and Innovation Initiative (BRII | www.business.gov.au) and have already received a favourable response. With the basic feasibility study completed in enough detail to submit credible technical information for the aforementioned BRII applications, we have now started developing a commercial-level feasibility system that will allow us to prove the concept in commercial environments.
Unlike systems built on top of a primitive data layer, our database and application solution does not limit users to the original pre-existing databases; it extends their capabilities far beyond conventional ‘search and find’ features.
Our system uses next-generation technology that can be customised to each user’s needs. It can then be applied to a variety of communication and data-filtering methodologies, enabling effective contextual use of the system for daily tasks across many departments in various domains.
Ultimately, our Meduza system ingests both structured and unstructured data and then validates it against newly acquired knowledge.
Our unique approach in this endeavour has led us to redesign and redefine lower-level objects and parameters in order to differentiate our database and make it more efficient. Our solution can easily plug into a multitude of existing database formats, including publicly available data and information that needs to be digitally recognised and restructured to serve a given objective. This enables the intelligent development of an improved system structure by determining a dynamic hierarchy to organise the information received.
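To make the idea of plugging into heterogeneous data formats and deriving a dynamic hierarchy more concrete, here is a minimal illustrative sketch. It is purely hypothetical (Meduza’s actual design has not been disclosed): all function names, field names, and source types below are invented for illustration. The sketch normalises records from two different source formats into one common layout, then groups them into a simple topic hierarchy.

```python
# Hypothetical sketch only; Meduza's real implementation is not disclosed.
# Illustrates plugging heterogeneous sources into one data model by
# normalising records, then grouping them into a dynamic hierarchy.
from collections import defaultdict

def normalise(record, source):
    """Map a source-specific record to a common field layout (illustrative)."""
    if source == "csv":
        # assume csv rows arrive as (topic, subtopic, text) tuples
        return {"topic": record[0], "subtopic": record[1], "text": record[2]}
    if source == "json":
        # assume json records use different, source-specific field names
        return {"topic": record.get("category", "unknown"),
                "subtopic": record.get("tag", "general"),
                "text": record.get("body", "")}
    raise ValueError(f"unknown source: {source}")

def build_hierarchy(records):
    """Group normalised records into a topic -> subtopic -> texts tree."""
    tree = defaultdict(lambda: defaultdict(list))
    for rec in records:
        tree[rec["topic"]][rec["subtopic"]].append(rec["text"])
    return tree

rows = [
    normalise(("security", "malware", "report A"), "csv"),
    normalise({"category": "security", "tag": "phishing", "body": "report B"}, "json"),
]
tree = build_hierarchy(rows)
print(tree["security"]["malware"])   # ['report A']
```

The point of the sketch is only the shape of the approach: source-specific adapters feed a common structure, and the hierarchy is derived from the data itself rather than fixed in advance.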
Our solution has the additional benefit of being implementable without any disruption to existing systems.
The practical applications of our system are numerous. To illustrate this, let’s take the example of a student trying to write a paper on a new subject.
Before the student can begin writing the actual paper, he first needs to source relevant, accurate and up-to-date information to build a solid base for his assignment. To accomplish this, he goes to the library to research key points. He might spend a few hours running Google searches and making notes. He reads journals or studies related to the topic, possibly sourcing statistics, analysing them, and then drawing possible conclusions from them.
This planning stage could take him weeks before he finally has all the resources he needs to complete the project.
Now imagine if, instead of weeks of research, he could jump straight to having all the resources he might require at his fingertips. Imagine that the information was already contextualised and pre-validated, with the most credible topics and key points for each one presented so the student could begin writing the paper straight away. In such a situation, he would be able to complete his assignment quickly, with the most accurate and up-to-date information in the field, and move on to the next task faster.
Now imagine that instead of a student writing a paper, it was a large-scale organisation.
This could be a pharmaceutical or biotechnology company with an R&D initiative, a government organisation, or even an oil and gas exploration and development company analysing large volumes of internally gathered information or researching external data.
It could even be a cyber security organisation. Constantly analysing reported threats, working through complaints, investigating security extortion attempts, and then devising patches and safeguards to prevent a threat from spreading is time-consuming and costly, and consumes substantial resources before the organisation can even move to the ‘fixing it’ stage.
Now, instead of having to go through that whole ordeal, what if all the information they needed were collected in real time, so that the division could be proactive rather than reactive (as is currently the case with government agencies), responding to threats before users even knew there was a problem? Additionally, what if this system had comprehensive knowledge of old and current malware and viruses, and of the best ways they have been dealt with in the past?

Such a system would learn and become more efficient as it expands its functionality through constantly self-updating procedures. Its sophisticated functionality would validate new information, automate responses in some instances, and recognise both external data and pre-established internal organisational resources, presenting continuously improving solutions in a timely manner.