Nice wide-ranging discussion with Stephen Wolfram by Tim Scarfe and Dr. Keith Duggar, initially focused on the confluence of LLMs (ChatGPT in this case) and mathematical computation with symbolic languages (Wolfram Alpha in this case). Alpha was integrated with ChatGPT through plugins and released today, which was a historic event. It's one of the first large-scale examples of neurosymbolic AI for public use.

This is an area that has been of particular interest for me over 25 years in R&D at KYield. For the sake of simplicity and communication, we separate our R&D into two primary eras, though of course in practice it varies from day to day. The first roughly half was dedicated to semantic languages and rules-based governance systems, consisting of considerable NLP and linguistics. The roughly second half, since moving to NM - particularly in the first few years as a frequent visitor at SFI - was focused on symbolic AI and data physics.

Our KOS (EAI OS) was the manifestation of the first half of the voyage, which realized the underlying theorem of KYield - yield management of knowledge. The synthetic genius machine (SGM) is the manifestation of the second half of the voyage, which I first disclosed during a talk at the ExperienceITNM conference on 9/13/19, titled 'Metamorphic Transformation with Enterprise-wide Artificial Intelligence'. The SGM is conceptually somewhat similar to the integration between Alpha and ChatGPT, in that it combines both semantic knowledge structure and neural network algorithms, but does so for very specific tasks - initially focused on accelerating discoveries by modeling patterns from the work of real human geniuses. Similar to Wolfram in some respects, we started to develop our own language to accomplish these goals (by necessity), with an eye towards stronger security and improved efficiency. Unlike Wolfram, we don't focus on computation for the end user, in large part because Wolfram had already done that work, and it's a massive undertaking beyond our resources that would be redundant.

So in response to one part of the discussion - a question by Keith (paraphrasing): can humans intentionally design AIs to accelerate discovery, if not evolution? I'm quite certain the answer is yes - we're already doing it. Our SGM is incomplete, and unlike ChatGPT I have no intention of releasing it in the wild until it has been rigorously tested and proven to be safe and accurate, but we're also seeing dramatic acceleration for specific goals by DeepMind and increasingly many others across the board. The individual models, products, companies, and challenges aside, neurosymbolic AI is very powerful.
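To make the general neurosymbolic pattern concrete (this is an illustrative sketch only, not KYield's SGM or the Wolfram plugin - all names and the scoring heuristic are hypothetical): a "neural" component proposes and ranks candidate answers, while a symbolic component vets them against hard facts in a knowledge structure, so statistical fluency is constrained by explicit knowledge.

```python
# Minimal neurosymbolic loop (hypothetical example): a stand-in "neural"
# proposer ranks candidates, and a symbolic check rejects any candidate
# that contradicts a stored fact.

def neural_propose(query, candidates):
    """Stand-in for a neural model: rank candidates by naive token overlap."""
    q = set(query.lower().split())
    return sorted(candidates, key=lambda c: -len(q & set(c.lower().split())))

def symbolic_check(candidate):
    """Symbolic layer: reject candidates contradicting a known fact."""
    # Known fact in our toy knowledge base: water boils at 100 C at sea level.
    if "water boils at 90" in candidate.lower():
        return False
    return True

def answer(query, candidates):
    """Return the highest-ranked candidate that survives the symbolic check."""
    for c in neural_propose(query, candidates):
        if symbolic_check(c):
            return c
    return None

print(answer("at what temperature does water boil",
             ["water boils at 90 C at sea level",
              "water boils at 100 C at sea level"]))
```

The design point is the division of labor: the statistical component handles open-ended ranking, while the symbolic component supplies hard constraints that a purely neural system can violate.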