I agree with Penrose: artificial intelligence, properly understood, is not what neural networks are doing.



Considering the perceptron to be its basic form, a neural network is not a model of a brain. Neural networks are not a digitized brain, nor a digital brain running like a virtual machine on a branching infrastructure like that of a brain; they are a flow diagram borrowed from a brain, usually a non-human brain, and borrowed for a modular assembly. Holistic integration like that of the Neocognitron is a plus that became standard in contemporary neural networks; TensorFlow experts could really explain this schema of staged algorithmic conversion and tell us whether those stages remain, to some degree, linear in nature.
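As a minimal sketch of what I mean by "stages of algorithmic conversion" (written in plain NumPy rather than TensorFlow, with arbitrary illustrative weights rather than a trained model), each stage is just a weighted sum passed through a threshold, so each stage is linear right up to the point where the nonlinearity is applied:

```python
import numpy as np

def perceptron_layer(x, weights, bias):
    """One modular stage: linear arithmetic plus a threshold nonlinearity."""
    z = np.dot(weights, x) + bias        # purely linear combination
    return np.where(z > 0, 1.0, 0.0)     # elementwise step function

# Stacking stages is the "modular assembly": the output of one flow step
# becomes the input of the next. Weights here are random placeholders.
x = np.array([0.5, -1.2, 3.0])
h = perceptron_layer(x, weights=np.random.randn(4, 3), bias=np.zeros(4))
y = perceptron_layer(h, weights=np.random.randn(1, 4), bias=np.zeros(1))
print(y)
```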


Nothing about the human brain is modular, since there is holistic integration between parts: between constellations and clusters inside each part of the brain, and between its different parts.

Gödel's theorem opens up interesting fields to define in terms of interdisciplinary boundaries and tech infrastructure, for example quantum computation, which is supposed to be a transversal and omnidirectional version of the Neocognitron.

  • The limitations of inference for building actionable possible worlds (profiling, speculation, predictive design)
  • Cosmology and epistemology as nodes in a somewhat traceable continuum

And this is probably a base conjecture for the design of self-regulation processes in metaheuristic algorithms. 
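To make that conjecture a little more concrete, here is a hedged sketch of what one self-regulation process in a metaheuristic might look like: a simulated-annealing loop that adjusts its own temperature from the acceptance rate it observes. The toy objective, the target acceptance rate and all other parameters are illustrative placeholders, not anything specified above.

```python
import math
import random

def objective(x):
    # Toy objective to minimize; a stand-in for any real fitness function.
    return (x - 3.0) ** 2 + math.sin(5 * x)

def self_regulating_annealing(steps=2000, target_acceptance=0.4):
    x = random.uniform(-10, 10)
    best, best_val = x, objective(x)
    temperature = 1.0
    accepted = 0
    for step in range(1, steps + 1):
        candidate = x + random.gauss(0, 1)
        delta = objective(candidate) - objective(x)
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            x = candidate
            accepted += 1
            if objective(x) < best_val:
                best, best_val = x, objective(x)
        # Self-regulation: every 100 steps, compare the observed acceptance
        # rate with the target and let the temperature correct itself.
        if step % 100 == 0:
            rate = accepted / 100
            temperature *= 1.1 if rate < target_acceptance else 0.9
            accepted = 0
    return best, best_val

print(self_regulating_annealing())
```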

It implies requirements for the supply chain of data feedback and for the training sets behind automated model generation, once we consider a future of data sampling in a self-replicating industrial setting. Basically, a lifecycle and an ecosystem for data in a world of augmented measuring.
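A hedged sketch of that lifecycle, under my own assumptions: new samples only enter the training set for automated model generation after passing a quality gate on provenance and trust. The Sample fields, the checks and the retrain hook are hypothetical names chosen for illustration, not an existing pipeline.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Sample:
    features: List[float]
    label: float
    source: str          # provenance of the measurement
    trusted: bool        # whether the source passed prior audits

def quality_gate(sample: Sample) -> bool:
    # Feedback supply-chain checks: provenance, trust and basic sanity.
    return sample.trusted and len(sample.features) > 0

def ingest(new_samples: List[Sample], training_set: List[Sample],
           retrain: Callable[[List[Sample]], None]) -> None:
    accepted = [s for s in new_samples if quality_gate(s)]
    training_set.extend(accepted)
    if accepted:                     # only regenerate the model on new data
        retrain(training_set)
```

The same gate would sit in front of any automated retraining job, which is where the "self-replicating industrial setting" would actually live.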

How this applies to a small-scale operation is beyond my current knowledge of infrastructure. Rather than the hype of quantum computation, second-order cybernetics may be a better fit for the dynamics Gödel was building proofs about.

 https://en.wikipedia.org/wiki/Second-order_cybernetics

A framework for decentralized feedback loops and reliable, transparent, ethical data sourcing faces a lot of nasty obstacles in contemporary society, some of them related to ideology pulling sampling methods and survey options far from statistical trust indicators. This is a technical problem, tied to corruption and sabotage in foreign-policy and warfare settings, that some people may choose to neglect.

This neglect is easy to notice in the business model of most AI startups, but more importantly in their community-manager policies on Discord channels. That is not to be taken lightly, since populism is a knockout blow to verificationism, a deadlock against the scrutiny and expansion of scientific indexation. With the social dimension of politics and AI, even open-source protocols are endangered, so the real-life use of Gödel's theorem is far from being a possibility and very close to becoming what Penrose calls out as overly optimistic triumphalism.

The obscure details of Penrose's theories and his requirements for intelligence to exist, although speculative in nature, are healthy in their identification of computation as a rather simple calculation done in accelerated time-lapse, maybe even an arithmetical process in many ways. So it is not a "myth" to separate an actual brain from a mockup of modular diagramming.

On the other hand, before we can have a robust framework for decentralized feedback and reliable mass-scale automation, a lot of cyber-security protocols need an update, and not in some sophisticated scenario: the updates must happen in daily use, in the very vulgar and mundane daily life of the average Joe.

Just consider Windows 11 and its fiasco.

All of this is a blockade, a deplatforming that keeps us grounded far from needing Gödel in our lives, and far from an infrastructure level where complexity and specialization can dip their toes across the different pools of data production in a macrosocial way, the way in which Artificial General Intelligence (AGI) could become a fact.

First, at the most localized and individual level, we may need a faster processor to aid our antivirus against AI-generated malware; such a processor could be impossible without specialized cloud support, similar in scalability to vaccination during pandemics or to the GPUs used by the AI generators themselves.

The Soviet Union, which should remain the public and explicit name of Russia, is ready to wreck any system with specialized jammers and counterfeit data that can insert, modify and downgrade any commercial personal computer. Bots doing this at mass scale would render all computers unusable, if they ever stopped caring about the public outcry of nerds and tech people everywhere.

Do you remember Hamas breaking into Israel? 

The computer world could suffer a similar strike against its operational capabilities and service providers whenever Russia decides to do it. Ransomware is just the tip of the iceberg when mass buffer overflows, memory leaks and malware have a greater reach and commercial antiviruses are powerless, even with a premium subscription to Bitdefender.

Something Gödel's theorem could be pointing at, in the context of AI and small-scale operations, is the base assumption of data corruption everywhere, always, forever.
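In practice, a minimal sketch of operating under that assumption (my own illustration, with placeholder payloads and digests) is to never consume data without re-deriving its digest and comparing it to an expected value obtained through an independent channel:

```python
import hashlib

def verify_integrity(payload: bytes, expected_sha256: str) -> bool:
    # Treat every payload as corrupted until its digest matches.
    digest = hashlib.sha256(payload).hexdigest()
    return digest == expected_sha256

data = b"training batch 0001"
expected = hashlib.sha256(data).hexdigest()   # in practice, fetched out-of-band
assert verify_integrity(data, expected)       # refuse to proceed otherwise
```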

A new era of security frameworks and systematic reviews stands against the power of "stacking the deck" with disruptive AI models piggybacked on our service providers' consumer products. End users must learn counter-surveillance now as part of their daily skills, and must become "cherry pickers" with falsifiability always in mind, almost like a crazy person, in the uncanny valley of AI-generated content, where the introduction of bias as the power to impose a format on how things are is increasingly becoming a concern.