Edge of the Map

What happens when we are at the edge of the map? What can advanced map learning technologies give us? Can backward-facing frameworks give us the scaffolding for a planetary civilization when, in fact, there has never been a planetary civilization?

The development of advanced map learning technologies offers us the potential to explore the unknown and expand our understanding of the world beyond our current limitations. However, this requires us to confront the limitations of our backward-looking frameworks, which rely on large language models (LLMs), probabilities, and historical data to understand the present and predict the future.

While these frameworks have served us well in the past, they also present significant limitations. They tend to reinforce existing biases and assumptions, preventing us from seeing beyond our current understanding and imagining new possibilities. Moreover, they often fail to account for the complexity and unpredictability of the world, leaving us vulnerable to unexpected disruptions and challenges.

To build a sustainable and equitable future, we must be willing to challenge these limitations and embrace new, more dynamic approaches to understanding the world. This means looking beyond our existing frameworks and acknowledging the inherent uncertainty and complexity of the world around us.

Advanced map learning technologies offer us a way forward, providing us with powerful tools for collecting and analyzing data and identifying patterns and connections that may not be immediately apparent. These technologies can help us to navigate uncharted territories, such as the depths of the ocean or the frontiers of knowledge, and to build a more accurate and detailed map of the world.

But to realize the full potential of these technologies, we must be willing to confront the limitations of our existing frameworks and embrace new ways of thinking about the world. This means being open to new ideas, challenging our assumptions, and recognizing the inherent uncertainty and complexity of the world. By doing so, we can build a better future that is worthy of the challenges and opportunities of the 21st century.

Obfuscating and Smuggling Externalities

The concept of externalities is a crucial aspect of economics and social sciences. Externalities refer to the spillover effects of an economic activity on third parties who are not involved in the activity. These effects can be either positive or negative and may occur in various ways, such as pollution, congestion, noise, and other environmental or social impacts. In many cases, externalities are overlooked or not considered in the decision-making process, leading to a distortion of market outcomes and a suboptimal allocation of resources.

A $40 shirt is probably a $240 shirt once you include environmental costs and externalities.

While it is true that the true cost of a product, such as a shirt, may exceed its market price due to the inclusion of environmental costs and externalities, it is difficult to provide a definitive figure without further information and analysis.

The environmental costs of producing a shirt may include the energy and resources required to extract and process the raw materials, the emissions and waste generated during production, and the impacts of transportation and distribution. These costs are often not reflected in the market price of the product, and are instead borne by society and the environment.

Similarly, externalities such as the social and health impacts of the production process, such as the working conditions of laborers and the impact of chemicals used in the production process, may also not be factored into the market price of the shirt.
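To make the arithmetic behind the $40/$240 claim concrete, here is a minimal sketch of how a "true cost" ledger might be assembled. The cost categories and dollar figures are purely illustrative assumptions chosen so the totals echo the example above; real life-cycle accounting would require actual data.

```python
# Illustrative only: these categories and dollar figures are assumptions
# chosen to show how a "true cost" accounting might be structured, not
# measured data about any real shirt.

RETAIL_PRICE = 40.00

# Hypothetical externalized costs per shirt (all values are placeholders).
externalities = {
    "raw material extraction and processing": 45.00,
    "water use and pollution": 35.00,
    "carbon emissions from production": 40.00,
    "transport and distribution emissions": 20.00,
    "labor conditions and health impacts": 45.00,
    "end-of-life disposal and landfill": 15.00,
}

true_cost = RETAIL_PRICE + sum(externalities.values())

print(f"Sticker price:       ${RETAIL_PRICE:.2f}")
for category, cost in externalities.items():
    print(f"  + {category}: ${cost:.2f}")
print(f"Estimated true cost: ${true_cost:.2f}")
```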

One example of a game that involves obfuscating externalities is the classic game of Monopoly. In Monopoly, players compete to acquire properties and collect rent from other players. However, the game does not account for the externalities of property ownership, such as the impact of high rents on local communities or the displacement of low-income residents. In this way, Monopoly can be seen as promoting a narrow and incomplete view of the economic system, one that fails to account for the broader social and environmental impacts of economic activity.

Another example of a game that involves smuggling externalities is the game of resource extraction. In this game, players compete to extract resources such as oil, gas, or minerals from a common pool. However, the game does not account for the negative externalities of resource extraction, such as environmental degradation, social conflict, and economic inequality. By ignoring these externalities, the game encourages players to pursue a strategy of short-term gain at the expense of long-term sustainability and social welfare.

In both of these examples, the nature of the game is obfuscating and smuggling externalities. The games create a false sense of reality, one that ignores the real-world impacts of economic activity and encourages players to pursue strategies that may harm others. This is not to say that games are inherently bad or that they cannot be used to promote social welfare. However, it is important to recognize the limitations of games and to design them in a way that takes externalities into account.

To address the issue of obfuscating and smuggling externalities in games, there are several possible approaches. One approach is to incorporate externalities into the game design explicitly. For example, a game could require players to pay a penalty for each unit of pollution they generate or each low-income resident they displace. This would force players to consider the broader social and environmental impacts of their actions and would incentivize them to pursue strategies that promote long-term sustainability and social welfare.
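As a sketch of what "incorporating externalities into the game design explicitly" could look like, consider a toy scoring rule in which each unit of pollution and each displaced resident carries a penalty. The mechanics and penalty values here are hypothetical and not taken from any existing game.

```python
from dataclasses import dataclass

# Hypothetical penalty rates; a real game design would tune these.
POLLUTION_PENALTY = 5      # points lost per unit of pollution generated
DISPLACEMENT_PENALTY = 20  # points lost per low-income resident displaced


@dataclass
class Turn:
    revenue: int             # points earned from rent, resources, etc.
    pollution_units: int     # pollution generated this turn
    residents_displaced: int


def score_turn(turn: Turn) -> int:
    """Score a turn with externalities charged back to the player."""
    penalty = (turn.pollution_units * POLLUTION_PENALTY
               + turn.residents_displaced * DISPLACEMENT_PENALTY)
    return turn.revenue - penalty


# A short-term-gain strategy loses its edge once externalities are priced in.
print(score_turn(Turn(revenue=100, pollution_units=8, residents_displaced=3)))  # 0
print(score_turn(Turn(revenue=80, pollution_units=1, residents_displaced=0)))   # 75
```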

Technological Objectivity vs. Intuition

The concept of technological objectivity refers to the idea that technology is neutral and objective, and therefore more reliable than human intuition. This idea is based on the assumption that technology operates without bias, emotion, or subjectivity, and is therefore better able to make accurate and unbiased decisions than humans.

However, this assumption is flawed. In many cases, the technological objectivity we defer to is actually worse than human intuition. This is because technology is not neutral or objective in the way that we often assume it to be.

One of the main problems with technological objectivity is that it is based on the idea that technology is free from human bias. However, the reality is that technology is designed and programmed by humans, and is therefore subject to the same biases and prejudices as humans. For example, facial recognition software has been shown to be less accurate in recognizing people of color, because it was trained on a data set that was biased towards white faces.

Another problem with technological objectivity is that it often fails to take into account the context and nuance of a situation. Technology operates on a set of rules and algorithms, which may not be able to account for the complexities and subtleties of human behavior. This can lead to incorrect or inappropriate decisions being made. For example, an algorithm used by a bank to determine credit scores may unfairly penalize people who live in certain neighborhoods or who have certain types of jobs.
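As a deliberately simplified sketch (not any real bank's model), the snippet below shows how a scoring rule that leans on a proxy variable such as a neighborhood code can systematically penalize applicants who are otherwise identical; the weights and the penalty are invented for illustration.

```python
# A toy scoring rule, not a real credit model. The weights and the
# neighborhood penalty are invented to illustrate how a proxy variable
# can encode bias that looks "objective" from the outside.

PENALIZED_NEIGHBORHOODS = {"district_9"}  # hypothetical proxy for demographics


def credit_score(income: float, on_time_payments: int, neighborhood: str) -> float:
    score = 300 + 0.002 * income + 5 * on_time_payments
    if neighborhood in PENALIZED_NEIGHBORHOODS:
        score -= 80  # the "objective" rule quietly penalizes where you live
    return score


# Two applicants with identical finances, different addresses.
print(credit_score(45_000, 60, "district_4"))  # 690.0
print(credit_score(45_000, 60, "district_9"))  # 610.0
```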

In addition, technological objectivity can sometimes lead to a lack of accountability. When a decision is made by a human, it is possible to hold that person accountable for their actions. However, when a decision is made by a machine, it can be difficult to determine who is responsible for any negative outcomes that may result.

In contrast, human intuition is often better able to take into account the context and nuances of a situation. Humans are able to use their judgment, experience, and empathy to make decisions that are appropriate for a particular situation. While human intuition is certainly not perfect, it is often better than the technological objectivity that we defer to.

In conclusion, the idea of technological objectivity as a superior alternative to human intuition is flawed. Technology is not neutral or objective in the way we often assume; it inherits the biases and prejudices of the people who design and train it, and it frequently misses the context and nuance a situation demands, leading to incorrect or inappropriate decisions. Human intuition is far from perfect, but it deserves more weight than the technological objectivity we so readily defer to.

Intelligence Not Substrate Dependent

The concept of intelligence has fascinated humans for centuries. From ancient Greek philosophers to modern-day scientists, the idea of intelligence has been explored, debated, and studied extensively. While there is still much to be learned about the nature of intelligence, it is clear that intelligence is not substrate dependent.

Substrate dependence refers to the idea that intelligence is somehow tied to the physical material on which it is instantiated. This idea is prevalent in the field of artificial intelligence (AI), where the substrate is often seen as the hardware on which the AI is running, such as a silicon semiconductor or a biological neuron.

However, this view of intelligence is misguided. The physics of intelligence is no more about silicon semiconductors or neurotransmitters than the physics of flight is about feathers or aluminum. Just as flight can be achieved through a variety of means, from feathers to engines, so too can intelligence be instantiated in a variety of substrates.

The key to understanding this is to recognize that intelligence is not a property of the substrate itself, but rather emerges from the patterns of activity within the substrate. In the case of artificial intelligence, this means that intelligence arises from the patterns of activity within the hardware and software of the system, not from the substrate itself.
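A toy way to see the point about patterns rather than substrates: the same behavior can be realized by entirely different mechanisms. Below, XOR is implemented once as a lookup table and once as a tiny network of threshold units with hand-set weights. The behavior lives in the input-output pattern, not in how it is realized. This is only an analogy for the multiple-realizability argument, not a claim about minds.

```python
# Two very different "substrates" realizing the same input-output pattern.

# Substrate 1: a bare lookup table.
XOR_TABLE = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}


def xor_table(a: int, b: int) -> int:
    return XOR_TABLE[(a, b)]


# Substrate 2: a two-layer network of threshold units with hand-set weights.
def step(x: float) -> int:
    return 1 if x > 0 else 0


def xor_network(a: int, b: int) -> int:
    h1 = step(a + b - 0.5)      # fires if at least one input is on (OR)
    h2 = step(a + b - 1.5)      # fires only if both inputs are on (AND)
    return step(h1 - h2 - 0.5)  # fires if OR is on but AND is not, i.e. XOR


# Identical behavior, entirely different realizations.
for a in (0, 1):
    for b in (0, 1):
        assert xor_table(a, b) == xor_network(a, b)
print("Same pattern, different substrates.")
```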

This is why we can build AI systems on a wide variety of substrates, from silicon semiconductors to biological neurons. In each case, the intelligence of the system arises from the patterns of activity within the substrate, not from the substrate itself. This is why AI researchers are exploring new substrates for AI systems, such as DNA, quantum computers, and even slime molds.

Similarly, the intelligence of biological organisms is not tied to the physical substrate of the brain. While the brain is certainly important for intelligence, it is not the only factor. Intelligence arises from the complex interactions between the brain, the body, and the environment.

In fact, recent research has shown that intelligence can even emerge in systems that do not have a traditional brain at all. For example, slime molds have been shown to exhibit intelligent behavior, such as solving mazes and finding the shortest path between two points. This demonstrates that intelligence is not tied to a specific physical substrate, but rather emerges from the patterns of activity within the system.

In conclusion, intelligence is not substrate dependent. Whether we are talking about artificial intelligence or biological intelligence, the key to understanding intelligence is to focus on the patterns of activity within the system, rather than the substrate itself. By recognizing this, we can explore new substrates for AI systems, and gain a deeper understanding of the nature of intelligence itself.

Manual Override

In today’s fast-paced world, we are constantly bombarded with information from various sources – the internet, social media, television, newspapers, magazines, and more. With the advent of technology, the amount of information available to us has grown exponentially. We have access to more information than ever before, and yet we find ourselves struggling to keep up with it all. This is because, as Herbert Simon, the Nobel Prize-winning economist, once said, “What information consumes is rather obvious: it consumes the attention of its recipients.”

Despite the advances in technology and the abundance of information, attention remains a manual override. While computers can process vast amounts of data in a short period of time, the human brain has limits to the amount of information it can process effectively.

Attention is a conscious effort to focus on a particular task or piece of information while filtering out distractions. It requires a deliberate effort to allocate mental resources, including working memory and cognitive control, to the task at hand. While technology can help us manage and organize information, it cannot replace the need for active attention and focus.

In fact, with the increasing number of distractions in the digital world, the need for manual override of attention has become even more critical. Social media notifications, email alerts, and other digital distractions can quickly derail our focus and consume our attention, leading to decreased productivity and increased stress levels.

Attention is a limited resource, and the more information we have access to, the more attention we need to allocate in order to process it. This means that a wealth of information can create a poverty of attention. We can become overwhelmed and find it difficult to focus on any one thing for a significant amount of time. We might find ourselves jumping from one source of information to another, constantly switching between different websites or apps, scrolling through social media feeds, and checking emails. This can lead to a lack of productivity, increased stress levels, and decreased overall well-being.

In order to cope with the overabundance of information sources that might consume our attention, we need to learn to allocate our attention efficiently. This means being selective about the information we consume, choosing only the most relevant and important sources. We should also learn to limit the amount of time we spend on each source of information, setting specific times for checking emails or social media, for example, and sticking to those times.

In addition to prioritizing tasks and information, it’s important to develop good information management habits. This might include using tools like bookmarks, folders, or tags to organize information, and setting up filters or rules to automatically sort incoming information into categories.
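As a minimal sketch of what "setting up filters or rules to automatically sort incoming information" might look like, here is a toy keyword-based router. The categories and keywords are made up; in practice this role is usually played by the filtering features of an email client or feed reader.

```python
# Toy keyword-based router for incoming items; categories and keywords
# are illustrative, not a recommendation for any particular tool.

FILTER_RULES = {
    "urgent": ["deadline", "outage", "invoice due"],
    "reading_list": ["newsletter", "longread", "essay"],
    "social": ["mentioned you", "new follower", "friend request"],
}


def categorize(subject: str) -> str:
    """Return the first matching folder, or 'inbox' if nothing matches."""
    lowered = subject.lower()
    for folder, keywords in FILTER_RULES.items():
        if any(keyword in lowered for keyword in keywords):
            return folder
    return "inbox"


for subject in [
    "Project deadline moved to Friday",
    "Weekly newsletter: maps and territories",
    "Someone mentioned you in a thread",
    "Lunch on Thursday?",
]:
    print(f"{subject!r} -> {categorize(subject)}")
```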

Ultimately, managing information and attention is about developing good habits and being mindful of our use of time and resources. By being selective about the information we consume, prioritizing tasks and information, and developing good information management habits, we can avoid the pitfalls of information overload and allocate our attention efficiently. In doing so, we can improve our productivity, reduce stress levels, and enhance our overall well-being.

As information continues to proliferate, it becomes increasingly important to focus on effective curation. Curation involves selecting, organizing, and presenting information in a way that makes it useful and accessible. In a world where the volume of information is overwhelming, curation can help us to prioritize and focus our attention on the most valuable and relevant information.

Another way to curate information is by taking advantage of tools and technologies designed to help us filter and manage information. Search engines, social media platforms, and other online tools can help us to find and organize information based on our specific needs and interests. Machine learning algorithms and other artificial intelligence technologies are also becoming increasingly effective at identifying patterns and relationships in large data sets, helping to identify trends and insights that might otherwise be hidden.

In addition to relying on tools and technologies, effective curation also involves developing our own skills and habits. This might include being selective about the information we consume, focusing on specific areas of interest, and taking breaks from information overload to recharge and refocus our attention.

The Engineers' Plot

It is not uncommon to see engineers attempt to venture into fields beyond their expertise, such as philosophy or physics, only to struggle and sometimes even become crackpots. While there may be several reasons for this, one major factor is the agency drive hardwired into firmware, which leads them to see the world exclusively in terms of ways it can be improved.

Engineering is a field that is built upon the notion of finding solutions to problems. Engineers are trained to approach challenges by breaking them down into manageable components and applying their problem-solving skills to find solutions. This approach works exceptionally well in engineering, where there are specific problems to solve and tangible results to be achieved. However, this mindset can create difficulties when applied to other fields that do not operate in the same way.

When engineers attempt to apply their problem-solving skills to fields such as philosophy or physics, they may struggle to find tangible problems to solve. These fields often deal with abstract concepts and theories that do not have clear solutions. Instead, they require a more nuanced and complex approach, one that is not necessarily focused on fixing problems.

Moreover, the agency drive hardwired into firmware can make it challenging for engineers to step back and accept the inherent uncertainty and complexity of these fields. The drive to see the world in terms of ways it can be improved may lead engineers to oversimplify complex issues and try to find solutions where there may not be any. This can result in the development of flawed theories or the adoption of extreme viewpoints, which can, in turn, lead to becoming a crackpot.

Additionally, the agency drive can make it challenging for engineers to accept viewpoints that differ from their own. Engineers are trained to think in terms of objective facts and data, and they may struggle to accept subjective experiences or viewpoints that do not fit within their worldview. This can lead to a dismissive attitude towards other fields, such as philosophy or physics, which can prevent them from gaining a deeper understanding of these subjects.

It is important to note that while the agency drive can be a hindrance in fields beyond engineering, it is also a valuable asset. The ability to see the world in terms of ways it can be improved has led to countless advancements in technology and engineering. However, it is essential to recognize that this drive may not always be useful or applicable outside of the engineering field.

In conclusion, the agency drive hardwired into firmware is a significant factor in why engineers may struggle when venturing into fields such as philosophy or physics. While this drive is a valuable asset in engineering, it can create difficulties when applied to fields that do not operate in the same way. Engineers must recognize the limitations of their problem-solving skills and be open to the complexity and uncertainty inherent in other fields to avoid becoming crackpots.

The development of artificial intelligence (AI) is a prime example of a field that severely stresses the philosophical and physics aptitudes of engineers. AI involves complex and abstract concepts such as machine learning, natural language processing, and computer vision, which require a deep understanding of the underlying principles.

However, the same agency drive hardwired into firmware that makes engineers successful in their field can also be a hindrance in AI development. Engineers may focus too heavily on finding solutions to problems and improving AI systems without fully understanding the philosophical and physics implications of their actions.

Furthermore, engineers may use terms such as “agency” without fully understanding their meaning or implications. Agency refers to the ability of an agent, whether human or artificial, to act independently and make choices. In AI development, agency is a critical concept, as it relates to the ability of AI systems to learn and adapt to their environment. However, engineers must also recognize the philosophical and ethical implications of creating AI systems with agency and ensure that they align with societal values and norms.

Additionally, engineers must also consider the physics implications of AI development, particularly in terms of the computational power required. As AI systems become more advanced and complex, they require increasingly powerful computing systems, which can strain energy resources and have significant environmental implications.

To avoid becoming “crackpots,” engineers working in AI development must approach the field with a sense of curiosity and a willingness to understand the philosophical and physics implications of their actions. They must also recognize the limitations of their problem-solving skills and be open to new perspectives and ideas.

In conclusion, the development of AI is a field that severely stresses the philosophical and physics aptitudes of engineers. While the agency drive hardwired into firmware can be a valuable asset, it can also be a hindrance in AI development. Engineers must approach the field with a sense of curiosity and a willingness to understand the underlying principles to avoid becoming “crackpots.”

Engineers working in AI development can reduce the concept of agency to cost functions, optimization, or goal-oriented intention without fully understanding its philosophical and physics implications. While these terms are essential to the development of AI systems, they capture only a small part of the larger picture.
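To make that reduction concrete: in much engineering practice, "agency" cashes out as nothing more than a loop that nudges parameters downhill on a cost function, as in the sketch below. The example is deliberately minimal; everything interesting about autonomy, context, and consequences lives outside it.

```python
# "Agency" in the narrow engineering sense: gradient descent on a cost function.
# Nothing here models free will, consciousness, context, or consequences;
# that is precisely the point of the caricature.

def cost(x: float) -> float:
    return (x - 3.0) ** 2          # the "goal" is simply to reach x = 3


def gradient(x: float) -> float:
    return 2.0 * (x - 3.0)         # derivative of the cost


x = 0.0                            # initial parameter
learning_rate = 0.1

for _ in range(100):
    x -= learning_rate * gradient(x)   # the entire "goal-oriented intention"

print(f"Optimized x = {x:.4f}, cost = {cost(x):.6f}")
```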

Agency is a complex and multifaceted concept that goes beyond the optimization of algorithms. It involves questions of free will, consciousness, and the ability to act autonomously. Engineers must be willing to explore these questions and understand the implications of creating AI systems with agency.

Furthermore, the obsession with the self-improvability of AI systems can be a hindrance in understanding the full potential and limitations of AI. While the ability to improve and optimize AI systems is critical, it is equally important to recognize that there are aspects of AI that remain invisible within the frame of improvability.

AI systems may be able to optimize their performance based on specific metrics or goals, but they may not be able to understand the broader context of their actions or the consequences of their decisions. There are ethical and societal implications to the development of AI systems that go beyond their ability to improve themselves.

In conclusion, engineers working in AI development must recognize the limitations of the improvability frame and be willing to explore the broader philosophical and physics implications of their work. They must approach the development of AI systems with a sense of curiosity and a willingness to understand the complexities of agency and its implications. Only by doing so can we ensure that the development of AI aligns with our values and goals as a society.

“Underpants Gnomes” Political Economy

The Underpants Gnomes episode of South Park provides an amusing but thought-provoking look at the issue of political economy. The premise of the episode is that a group of gnomes has been stealing underpants from the residents of South Park as part of a grand plan to achieve profits. However, when asked about the second phase of their plan, the gnomes are at a loss to explain what it is. This sets up a humorous but insightful commentary on the sometimes haphazard logic of economic planning.

In the context of the episode, the boys from South Park are tasked with giving a presentation to voters explaining why they should prevent a large corporation, Harbucks, from opening up next to Tweek’s Coffee, a local establishment. The boys are passionate about their cause, but their arguments are ultimately undermined by the fact that they have no real plan beyond stopping Harbucks. In contrast, the gnomes have a plan, but it is so poorly thought out that it is essentially meaningless.

The Underpants Gnomes episode can be seen as a critique of the idea that profits can be achieved simply by collecting resources or capital without a clear understanding of how to turn them into a profitable enterprise. This is reflected in the gnomes’ plan, which hinges on collecting underpants without any clear idea of what to do with them. In this sense, the gnomes represent a kind of parody of economic planning, where the focus is on collecting resources rather than developing a clear strategy for turning them into profit.

At the heart of the Underpants Gnomes episode is the idea that economic success requires more than just collecting resources or capital. It requires a clear understanding of how to use those resources to create value and generate profits. This is true whether we are talking about a small business like Tweek’s Coffee or a large corporation like Harbucks. Without a clear plan and a strategy for turning resources into profits, even the most well-funded enterprise is likely to fail.

In conclusion, the Underpants Gnomes episode of South Park provides an entertaining but insightful commentary on the issue of political economy. By highlighting the importance of having a clear plan and strategy for turning resources into profits, it reminds us that economic success requires more than just collecting resources or capital. It requires careful planning, sound strategy, and a willingness to adapt and change as circumstances dictate. Whether we are talking about small businesses or large corporations, the lessons of the Underpants Gnomes are clear: without a clear plan and a sound strategy for turning resources into profits, success is likely to remain elusive.

Rant about [technology X] not supporting [corner case]

Person: Ugh, I can’t believe Technology X doesn’t support this corner case scenario. It’s so frustrating!

Person 2: Hmm, have you tried my favorite technology Y? It’s completely unrelated to your corner case, but it might be worth checking out.

Person: Sure, I’m open to new options. What does technology Y do?

Person 2: Well, technology Y is a cutting-edge platform that’s designed to streamline workflows and increase efficiency. It’s highly adaptable and can be used in a wide variety of contexts. I know it’s not exactly related to your corner case, but I think it might be a good fit for your needs.

Person: Okay, that sounds interesting. Can you tell me more about how it works?

Person 2: Sure thing! Technology Y is a cloud-based solution that can be accessed from anywhere, at any time. It uses advanced algorithms to automate repetitive tasks, freeing up your time and allowing you to focus on more strategic work. Plus, it’s highly customizable, so you can tailor it to your specific needs.

Person: That sounds really promising! Do you think it could help me with my corner case?

Person 2: Well, technology Y isn’t specifically designed for your corner case, but it’s possible that it could be adapted to meet your needs. I’d be happy to put you in touch with some of our experts to see if we can come up with a solution.

Person: That would be amazing! Thank you so much for suggesting technology Y. I’ll definitely look into it further.

Withdrawal

The idea of withdrawing from an environment that goes against one’s principles is a concept that has been explored by philosophers throughout history. Plato, the ancient Greek philosopher, believed that it is better to suffer injustice than to commit it. Similarly, Confucius, the Chinese philosopher, emphasized the importance of maintaining one’s personal values and principles even in difficult situations. Both of these philosophers recognized the importance of staying true to oneself, even if it means withdrawing from a harmful environment.

Plato’s perspective on this topic can be found in his famous work, “The Republic.” In this work, Plato argues that justice is a fundamental aspect of a good society. He believed that it is better to suffer injustice than to commit it because committing injustice would cause harm to one’s character. According to Plato, a person’s character is their most important possession, and it should be protected at all costs. If a person finds themselves in an environment that goes against their principles and values, they should withdraw from it rather than compromise their character.

Similarly, Confucius emphasized the importance of maintaining personal values and principles, even in difficult situations. He believed that a person’s character is shaped by their actions and that one should always strive to act in a way that is consistent with their values. Confucius believed that if a person finds themselves in an environment that goes against their principles, they should withdraw from it in order to maintain their integrity.

In both Plato and Confucius’s philosophy, the importance of character and personal values is emphasized. According to them, compromising one’s character in order to fit in with a harmful environment is not worth it. It is better to withdraw from the environment and protect one’s integrity.

However, it is important to note that withdrawing completely from an environment may not always be feasible. In some situations, it may be necessary to stay in the environment in order to effect change or to protect others from harm. In such situations, it is important to find ways to maintain one’s personal values and principles while still working within the environment.

In conclusion, the idea of withdrawing from an environment that goes against one’s principles is a concept that has been explored by philosophers such as Plato and Confucius. Both of these philosophers believed that it is better to protect one’s character and integrity than to compromise them for the sake of fitting in with a harmful environment. While withdrawing completely may not always be feasible, it is important to find ways to maintain personal values and principles in any environment.

Plato:

  • “I would rather suffer anything than injustice.” (The Republic)
  • “A good man will not be any less good because he has made a mistake or two.” (Phaedo)

Confucius:

  • “A superior man is modest in his speech, but exceeds in his actions.” (Analects)
  • “When anger rises, think of the consequences.” (Analects)
  • “The man who moves a mountain begins by carrying away small stones.” (Analects)

These quotes demonstrate the emphasis that Plato and Confucius placed on maintaining personal values and integrity, even in challenging situations.