It is not uncommon to see engineers venture into fields beyond their expertise, such as philosophy or physics, only to struggle and sometimes even become crackpots. There may be several reasons for this, but one major factor is the agency drive hardwired into their firmware: it leads them to see the world exclusively in terms of ways it can be improved.
Engineering is built on finding solutions to problems. Engineers are trained to break a challenge into manageable components and solve each one, an approach that works exceptionally well in engineering, where there are specific problems to solve and tangible results to achieve. The same mindset, however, creates difficulties in fields that do not operate that way.
When engineers bring their problem-solving skills to fields such as philosophy or physics, they may struggle to find tangible problems to solve at all. These fields often deal in abstract concepts and theories that admit no clear solutions; they demand a more nuanced approach, one not centered on fixing things.
Moreover, the agency drive can make it hard for engineers to step back and accept the inherent uncertainty and complexity of these fields. The urge to see the world in terms of ways it can be improved may lead them to oversimplify complex issues and hunt for solutions where none exist. The result can be flawed theories or extreme viewpoints, which is the road to becoming a crackpot.
The agency drive can also make it difficult for engineers to accept viewpoints that differ from their own. Trained to think in terms of objective facts and data, they may struggle with subjective experiences or perspectives that do not fit their worldview. This can breed a dismissive attitude toward fields such as philosophy or physics, and that dismissiveness blocks any deeper understanding of them.
To be clear, the agency drive is not only a hindrance; it is also a valuable asset. Seeing the world in terms of ways it can be improved has produced countless advances in technology and engineering. The point is simply that the drive is not always useful or applicable outside engineering.
In conclusion, the agency drive hardwired into their firmware is a significant reason engineers may struggle when they venture into fields such as philosophy or physics. The drive serves them well within engineering, but it creates difficulties in fields that do not operate the same way. To avoid becoming crackpots, engineers must recognize the limits of their problem-solving skills and stay open to the complexity and uncertainty inherent elsewhere.
The development of artificial intelligence (AI) is a prime example of a field that severely stresses the philosophical and physics aptitudes of engineers. AI spans subfields such as machine learning, natural language processing, and computer vision, all of which rest on abstract principles that demand deep understanding.
However, the same agency drive that makes engineers successful in their own field can be a hindrance in AI development. They may focus so heavily on solving problems and improving AI systems that they never fully consider the philosophical and physical implications of what they are building.
Furthermore, engineers may use terms such as “agency” without fully understanding their meaning or implications. Agency refers to the capacity of an agent, whether human or artificial, to act independently and make choices. In AI development it is a critical concept, since it bears on the ability of AI systems to learn from and adapt to their environment. Engineers must also reckon with the philosophical and ethical implications of building AI systems that possess agency, and must ensure those systems align with societal values and norms.
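To see how much gets compressed, consider how “agency” is typically operationalized in engineering practice: a loop that observes, scores actions, and updates. Everything in the sketch below is hypothetical (the Env class, the scoring rule, the exploration rate); it is a minimal illustration of the pattern, not anyone's real system.

```python
import random

class Env:
    """Hypothetical toy environment: the agent is rewarded for reaching state 10."""
    def __init__(self):
        self.state = 0

    def step(self, action):             # action is -1 or +1
        self.state += action
        reward = -abs(10 - self.state)  # closer to 10 is better
        return self.state, reward

def choose(state, value):
    """The engineering reduction of "choice": pick the action with the
    highest learned value estimate, with a little random exploration."""
    if random.random() < 0.1:
        return random.choice([-1, 1])
    return max([-1, 1], key=lambda a: value.get((state, a), 0.0))

env, value = Env(), {}
state = env.state
for _ in range(200):
    action = choose(state, value)
    next_state, reward = env.step(action)
    # The engineering reduction of "adaptation": nudge the stored value
    # estimate toward the reward that was actually observed.
    key = (state, action)
    value[key] = value.get(key, 0.0) + 0.1 * (reward - value.get(key, 0.0))
    state = next_state

print("final state:", state)  # tends to hover near 10, the "goal"
```

Notice how little of the philosophical notion survives: “choice” is an argmax, and “adaptation” is a dictionary update. That compression is exactly what makes the term feel deceptively well understood.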
Engineers must also consider the physical implications of AI development, particularly the computational power it requires. As AI systems grow more advanced and complex, they demand ever more powerful computing infrastructure, which strains energy resources and carries significant environmental costs.
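The scale involved is easy to underestimate. As a back-of-envelope illustration (the cluster size and duration below are assumed numbers, not measurements of any real system), the energy consumed is simply power multiplied by time:

```python
# Hypothetical training run: a 10 MW cluster drawing full power for 30 days.
power_watts = 10e6                     # 10 MW draw (assumed, for illustration)
seconds = 30 * 24 * 3600               # 30 days

energy_joules = power_watts * seconds  # E = P * t
energy_mwh = energy_joules / 3.6e9     # 1 MWh = 3.6e9 J

print(f"{energy_joules:.2e} J = {energy_mwh:,.0f} MWh")
# 2.59e+13 J = 7,200 MWh -- roughly the monthly electricity use of
# several thousand homes, for a single training run.
```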
To avoid becoming “crackpots,” engineers working in AI must approach the field with curiosity and a willingness to grapple with its philosophical and physical implications, while recognizing the limits of their problem-solving skills and staying open to new perspectives and ideas.

In conclusion, AI development severely stresses the philosophical and physics aptitudes of engineers. The agency drive hardwired into their firmware can be a valuable asset here, but it can just as easily be a hindrance, and only curiosity about the underlying principles keeps engineers from drifting into crackpottery.
You make an excellent point that engineers working in AI development can reduce the concept of agency to cost functions, optimization, or goal-oriented intention without fully understanding its philosophical and physical implications. Those terms are essential to building AI systems, but they capture only a small part of the larger picture.
Agency is a complex, multifaceted concept that goes well beyond the optimization of algorithms. It raises questions of free will, consciousness, and autonomous action, and engineers must be willing to explore those questions and understand what it would mean to create AI systems that genuinely have agency.
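The starkness of that reduction is easiest to see written out. In code, “goal-oriented intention” often collapses into a single minimization over a cost function; the cost function below is hypothetical, chosen only so the example runs:

```python
def cost(action, goal=3.0):
    """Hypothetical quadratic cost: squared distance of an action from a goal."""
    return (action - goal) ** 2

# The entire "intention" of this "agent" is one line: choose the action
# that minimizes cost. Free will, consciousness, and context never appear.
actions = [x / 10 for x in range(-100, 101)]  # candidate actions in [-10, 10]
chosen = min(actions, key=cost)

print(chosen)  # 3.0 -- the "goal", but only in the thin, optimization sense
```

Whatever else agency may be, it is clearly more than this, and the sketch is useful precisely because it shows how much the optimization framing leaves out.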
Furthermore, an obsession with the self-improvability of AI systems can get in the way of understanding their full potential and limitations. The ability to improve and optimize AI systems is critical, but it is equally important to recognize that some aspects of AI are invisible within the frame of improvability.
An AI system may optimize its performance against specific metrics or goals while remaining blind to the broader context of its actions or the consequences of its decisions. The ethical and societal implications of AI development extend well beyond a system's ability to improve itself.
In conclusion, engineers working in AI development must recognize the limits of the improvability frame and be willing to explore the broader philosophical and physical implications of their work. They must approach AI with curiosity and a willingness to grapple with the complexities of agency. Only then can we ensure that the development of AI aligns with our values and goals as a society.