The new software promises a much more advanced AI system, capable of interacting with its environment and processing data like humans.
Brain Simulator II, a software platform that demonstrates the emergence of Artificial General Intelligence (AGI), is claimed to be the next phase of AI. The research project will help develop new algorithms that simulate biological neuron circuits coupled with high-level artificial intelligence techniques.
Brain Simulator II allows exploration of diverse AI algorithms, and thus the development of an end-to-end AGI system with modules for vision, hearing, robotic control, learning, internal modelling, planning, imagination and forethought.
Charles Simon, CEO of FutureAI and creator of Brain Simulator II, said, “New, unique algorithms that directly address cognition are the key to helping AI evolve into AGI.”
Current AI-based devices such as Alexa and Siri are not capable of reasoning simply about time and space. But according to Simon, Sallie, the Brain Simulator’s artificial entity, can move, see, touch and smell objects, and understand speech within a simulated environment. It can also learn to navigate complex paths by associating them with corresponding landmarks.
The AI system can estimate distances and learn new words in connection with the surrounding environment – much like a toddler learning to speak.
Processes data like humans
All information collected by Sallie is kept in a Universal Knowledge Store. Situation-relevant information is then retrieved to comprehend objects in the physical environment, a process modelled closely on human perception.
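The idea of a knowledge store that links facts about objects and retrieves the situation-relevant ones can be sketched as a small labeled graph. This is an illustrative sketch only, not FutureAI's actual implementation; all class and relation names here are assumptions:

```python
# Minimal sketch of a knowledge store: "things" connected by labeled
# relationships, with a query that pulls out situation-relevant facts.
# Names are hypothetical, not Brain Simulator II's real data structures.

class Thing:
    def __init__(self, label):
        self.label = label
        self.relationships = []  # list of (relation, target Thing)

class KnowledgeStore:
    def __init__(self):
        self.things = {}

    def add_thing(self, label):
        if label not in self.things:
            self.things[label] = Thing(label)
        return self.things[label]

    def relate(self, source, relation, target):
        s, t = self.add_thing(source), self.add_thing(target)
        s.relationships.append((relation, t))

    def query(self, label, relation=None):
        # Return the labels of things related to `label`,
        # optionally filtered by relation type.
        thing = self.things.get(label)
        if thing is None:
            return []
        return [t.label for r, t in thing.relationships
                if relation is None or r == relation]

store = KnowledgeStore()
store.relate("ball", "is-a", "toy")
store.relate("ball", "has-color", "red")
store.relate("ball", "located-in", "playroom")
print(store.query("ball", "is-a"))  # -> ['toy']
```

The point of the sketch is that any kind of fact fits the same (source, relation, target) shape, which is what makes such a store "universal".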
Brain Simulator II merges neural network and symbolic AI techniques to create an array of millions of neurons interconnected by synapses.
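An array of neurons connected by weighted synapses can be sketched as a simple integrate-and-fire model: charge accumulates across synapses until a neuron crosses its threshold and fires. This is a generic sketch of that kind of model, not FutureAI's engine; the class, parameters, and threshold value are illustrative assumptions:

```python
# Sketch of an integrate-and-fire neuron array with weighted synapses.
# Illustrative only; not Brain Simulator II's actual neuron engine.

import numpy as np

class NeuronArray:
    def __init__(self, n, threshold=1.0):
        self.charge = np.zeros(n)     # accumulated charge per neuron
        self.threshold = threshold
        self.synapses = []            # list of (src, dst, weight)

    def connect(self, src, dst, weight):
        self.synapses.append((src, dst, weight))

    def step(self, fired):
        # Propagate charge from neurons that fired this cycle.
        for src, dst, w in self.synapses:
            if src in fired:
                self.charge[dst] += w
        newly_fired = {int(i) for i in
                       np.flatnonzero(self.charge >= self.threshold)}
        self.charge[list(newly_fired)] = 0.0  # reset fired neurons
        return newly_fired

net = NeuronArray(4)
net.connect(0, 1, 0.6)   # too weak to fire neuron 1 alone
net.connect(0, 2, 1.0)   # strong enough to fire neuron 2
net.connect(2, 3, 1.0)
fired = net.step({0})    # neuron 2 reaches threshold
fired = net.step(fired)  # spike propagates onward
print(fired)  # -> {3}
```

Real engines of this kind batch the synapse updates so that millions of neurons can be stepped in real time, which matches the performance claim in the feature list below.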
“Brain Simulator II combines vision and touch into a single mental model and is making progress toward the comprehension of causality and the passage of time,” Simon states. “As the modules are enhanced, progressively more intelligence will emerge.”
Further, any cluster of neurons can be grouped into a “Module” and backed by any desired program code. For example, Brain Simulator II can integrate neural network recognition techniques with symbolic AI software structures to efficiently build a relevant knowledge store.
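The module idea, a neuron cluster whose behavior is supplied by ordinary code, can be sketched as a base class with neural-style and symbolic subclasses wired together. The class names and interfaces here are hypothetical, not Brain Simulator II's actual .NET API:

```python
# Sketch of the "Module" abstraction: each module owns a neuron cluster
# but its behavior is arbitrary program code. Names are illustrative.

class Module:
    """Base class: a named neuron cluster driven by code."""
    def __init__(self, name, size):
        self.name = name
        self.neurons = [0.0] * size   # activation levels

    def step(self, inputs):
        raise NotImplementedError

class RecognizerModule(Module):
    """Neural-style module: picks the label whose pattern best matches."""
    def __init__(self, name, patterns):
        super().__init__(name, len(patterns))
        self.patterns = patterns      # label -> feature vector

    def step(self, features):
        def score(p):
            return sum(a * b for a, b in zip(p, features))
        return max(self.patterns, key=lambda k: score(self.patterns[k]))

class KnowledgeModule(Module):
    """Symbolic module: records recognized objects as facts."""
    def __init__(self, name):
        super().__init__(name, 0)
        self.facts = []

    def step(self, label):
        self.facts.append(("seen", label))
        return self.facts

vision = RecognizerModule("Vision", {"ball": [1, 0], "block": [0, 1]})
memory = KnowledgeModule("UKS")
label = vision.step([0.9, 0.1])   # neural recognition step
print(memory.step(label))         # -> [('seen', 'ball')]
```

The design point is the one the article makes: because each module is plain code behind a common interface, a neural recognizer and a symbolic store can be composed in the same network.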
Other features of Brain Simulator II include:
- Real-time processing of millions of neurons via a desktop computer.
- The ability to create networks from scratch or from a library of functions.
- More than 20 module types for speech, vision, and robotic controls.
- Universal Knowledge Store modules capable of storing any kind of information in neurons and synapses.
- The ability to write modules in any language supported by Microsoft’s .NET platform.
Brain Simulator II’s 2D and 3D simulators are simple, with a few “well-working” object types, possible attributes and relationships. These can be further expanded to learn new objects, attributes and relationships as they are encountered within the environment.