A model of information and dominance


This link takes you back to the CMOL page. To run the applet you will need a Java Runtime Environment installed. The paper "Dynamics of Opinions and Social Structures" associated with the applet can be downloaded from arXiv:0708.0368.

74 agents connected by 100 links communicate with each other to acquire new information about agents they find interesting. What they find interesting is what their friends talk about. Each node's color, between red and blue, represents its relative interest in the nodes marked with red and blue halos, and the Voting pattern shows this interest averaged over all agents. The node size represents the other agents' interest in that agent.

To get better access to new information, an agent can turn to the friend who provided her with the most recent information about the agent in question and establish a new connection: that friend has a friend with even newer information about the interesting agent. The "Communication level" and "Interest allocation" controls in the applet change the strength of the three components of the model (for details see here):
  • Communication with connected friends
  • New friend via old friend
  • Interest in specific information
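The "new friend via old friend" step can be sketched as follows. This is a minimal illustration with a data layout of our own (`pointer[a][j]` holding the friend who most recently told agent `a` about agent `j`); it is not the authors' code.

```python
# Hypothetical sketch of the "new friend via old friend" rewiring step.
# pointer[a][j] = the friend who most recently gave agent a news about agent j.

def rewire_toward(agent, target, pointer, links):
    """Create a shortcut toward `target` via the informant's own informant."""
    friend = pointer[agent][target]           # who told me about target
    friends_friend = pointer[friend][target]  # who told my friend about target
    if friends_friend not in (agent, friend):
        links.add(frozenset((agent, friends_friend)))
    return links
```

In the full model a randomly chosen link is removed at the same time, so the total number of links stays fixed.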
Agents are not only interested in other agents, but also in having other agents interested in them: when interested agents work to get new information about a specific agent, they at the same time provide their information about other agents. Here we model two different strategies to engineer other agents' interests:
  • The media strategy M
  • The politician strategy P
An agent with the media strategy broadcasts information about itself: for each communication event anywhere in the system, randomly chosen agents together convert a fraction of the total interest memory in the system to M. By contrast, the politician uses its local network to persuade other agents, imposing its personality on the agent it talks to by converting a fraction of that agent's interest memory to P.
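The difference between the two strategies can be sketched like this. The data layout (each agent's interest as a plain list of agent names) is our own simplification; the reserved fixed elements described in the technical section below are ignored here.

```python
import random

def media_convert(interest_vectors, m_agent, fraction):
    """Media strategy M: after any communication event, randomly chosen
    entries across ALL agents' interest vectors are converted to M."""
    total = sum(len(v) for v in interest_vectors.values())
    agents = list(interest_vectors)
    for _ in range(int(fraction * total)):
        a = random.choice(agents)
        interest_vectors[a][random.randrange(len(interest_vectors[a]))] = m_agent

def politician_convert(interest_vector, p_agent, fraction):
    """Politician strategy P: only the conversation partner's interest
    vector is converted, locally."""
    for _ in range(int(fraction * len(interest_vector))):
        interest_vector[random.randrange(len(interest_vector))] = p_agent
```

The key design difference is scope: M acts globally on the whole system's interest memory, while P acts only on the single agent the politician talks to.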

The strategy can be changed for two of the agents in the applet: the leftmost agent's strategy with the blue button "B(lue) strategy" and the rightmost agent's strategy with the red button "R(ed) strategy". The race between the two agents can be followed in the voting-pattern graph. The two lines indicate the agents' proportionate interest in the red and the blue strategy relative to the total interest in red and blue. The two filled curves indicate the agents' total interest in the two strategic agents relative to the total interest in all agents (when no strategy is chosen, the voting pattern represents the interest in the agents with the halos).

To garner votes, the strategic agents can also be antagonistic toward their opponent. In this way they win not random interest from the agents they communicate with, but interest that previously was devoted to the opponent. This strategy has the prefix a- (weak) or A- (strong) and is very powerful. The weak antagonist can only win interest directly from the opponent, whereas for the strong antagonist, discussions about it by anyone always first eliminate interest in the opponent.
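The weak antagonist's targeted conversion can be illustrated with a short sketch (hypothetical names and layout, not the authors' code): instead of overwriting random positions, it overwrites positions currently held by the opponent.

```python
def antagonistic_convert(interest, politician, opponent, n_slots):
    """Weak antagonist: assigned slots preferentially overwrite positions
    currently devoted to the opponent, converting its interest directly."""
    converted = 0
    for i, name in enumerate(interest):
        if converted == n_slots:
            break
        if name == opponent:
            interest[i] = politician
            converted += 1
    return converted  # slots actually won from the opponent
```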

The network can be laid out based on the degree of the agents, the attention they receive from other agents, or how recent their information about other agents is. Together these represent the three main components of the model listed above.

To study a generated network in more detail, stop the simulation and move the agents around with the left mouse button by clicking and dragging. Repeated clicking without dragging lets the nodes relax under spring forces. With the right button, links between agents can be created or removed. Pressing the middle mouse button on a node and releasing it on another shows the communication pathway from the first to the second node.

The reset button "Mem" clears all agents' memory but preserves the network structure, and the reset button "Net" does the opposite.


This is a more technical description of the model, close to pseudocode.
  • Memory Three vectors represent the memory of an agent. Two of the vectors have the same length as the number of agents N in the system and form a simple local map of information flow in the network: one vector contains the age of the information and the other the name of the friend who provided it, as a pointer toward the information source. In this way every agent has an idea of the direction in which any other agent lies in the network, as well as a proxy for the quality of that information.
    The third vector can have any length, set by the scroll bar "Interest allocation" in the applet. This vector represents an agent's interest in other agents by containing different proportions of the agents' names. We have chosen to reserve the vector's first N-1 elements, one for each of the other agents in the system; in this way every agent always has a finite interest in everyone. The remaining elements in the vector change dynamically as the agents communicate with each other. Local interest allocation corresponds to a long vector with almost complete dominance of personal interest over the fixed "global" interest in everyone.
  • Communication By choosing a link randomly (communication constrained by links) or inversely proportional to the degree at its ends (communication constrained by nodes), we select two agents to talk to each other. One of the agents chooses the subject of the conversation by randomly selecting an element, i.e. an agent, proportional to its occurrence in the interest vector. The agents then compare how recent their information about the agent of interest is, and the one with the older information updates its memory: the age of the information is copied from the agent with the newer information, and the pointer is updated to the friend who just provided the information.
    The two agents also update their interest vectors by randomly assigning a position to each other as well as to the agent they talked about. This is where the politician strategy differs from that of normal agents: an agent communicating with a politician will assign more than one position to the politician (the number being set by the strength of the politician in the applet).
    An antagonistic politician, blue for example, attacks the other agent's interest in blue's opponent red: the assignment is no longer random, but deliberately targets positions previously assigned to red.
  • Rewiring The agents strive to get better access to information about other agents they are interested in. One way is to communicate a lot; the other is to shortcut the communication pathways. In a rewiring step, a randomly chosen agent picks a random element in its interest vector, and thereby the corresponding agent, proportional to how many times it occurs in the memory. The agent then goes to the friend who provided the most recent information about that agent to find out where she in turn got the information from. By establishing a link to its friend's friend, the agent has, if the information was correct and up to date, made the information pathway one step shorter. To keep the number of links in the system balanced, we at the same time remove a randomly chosen link.
The ratio between communication and rewiring in the system is an important parameter set by the "Communication level" in the applet.
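Putting the pieces together, a single communication event might look like the following minimal Python sketch. The data layout (`age`, `pointer`, and `interest` dictionaries) and all names are our own simplification, not the authors' code, and the reserved fixed interest elements are ignored.

```python
import random

def communication_event(a, b, age, pointer, interest):
    """Agents a and b talk about a subject drawn from a's interest vector;
    whoever holds the older information about the subject updates its memory."""
    subject = random.choice(interest[a])  # proportional to occurrence
    if subject in (a, b):
        return
    # lower age means newer information
    newer, older = (a, b) if age[a][subject] < age[b][subject] else (b, a)
    age[older][subject] = age[newer][subject]
    pointer[older][subject] = newer  # the informant becomes the new pointer
    # both agents also refresh interest in each other and in the subject
    for agent, topics in ((a, (b, subject)), (b, (a, subject))):
        for topic in topics:
            interest[agent][random.randrange(len(interest[agent]))] = topic
```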

We increment the age of all information the agents have about each other after every L'th communication event, L being the number of links in the system. Because every agent has information of age 0 about itself, the information about any agent gets older the further away from that agent it sits in the network. However, if an agent is very popular, the information can travel far without aging much.
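The aging step can be written as a short sketch, again with our own data layout (`age[a][j]` being agent `a`'s information age about agent `j`):

```python
def age_tick(age):
    """Every L'th communication event (L = number of links), all stored
    information ages by one step; each agent's information about itself
    stays at age 0."""
    for a, row in age.items():
        for other in row:
            row[other] = 0 if other == a else row[other] + 1
```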

In society one observes social groups with widely different music tastes, religious beliefs, and languages. They emerge and disappear on all scales, from extreme subcultures to mainstream mass culture. Several positive-feedback mechanisms drive the diversity of beliefs in social systems. Some of these mechanisms can be analyzed in terms of a hugely simplified model of a dynamic network that incorporates the basic feedback between information assembly through communication and the formation of social connections.

Our model consists of a social network of agents, each having a memory. This individual memory is a simple local picture of where other agents are in the network together with a priority of relative interest in each agent. The agents communicate with other agents and modify their memory when they get new information about other agents. Based on this memory they also build new social connections to get better access to agents they find interesting.

The strong coupling between the agents' beliefs, the inner world, and their positions in the dynamic network structure, the outer world, has interesting consequences. For example, the system cannot be reset by resetting only the agents' inner or outer world. The two have to be reset simultaneously, because otherwise information about the old system remains stored in the world that was not reset and enables a partial recovery of the system.

In the model, a social system about the size of a large school class is simplified into a number of agents. These agents form a network that dynamically adjusts itself to facilitate hunter-gatherer behavior in information space, which in turn is reflected in a tribal organization of the evolving social network. This tribal organization is sensitive to information manipulation, as illustrated by the influence of particularly convincing demagogues.

The model allows us to consider the impact of certain charismatic people. Thanks to their greater charisma, they can make their fellow agents think disproportionately more about them, or equivalently, about political objectives of which they are the main representatives. Thus, our model allows for a new analysis of the effects of celebrities, politicians, or prophets in a social system.

Scenario 1: Consider the introduction of a single politician, or of several politicians or media persons. We find that the associated engineering of communication tends to streamline the social network into hierarchical structures around a celebrity center of fashionable persons.

Scenario 2: If two politicians garner votes with different strategies, one only advocating for himself and the other antagonistically, purposefully winning votes from the other side, the antagonistic politician does much better. The antagonistic strategy is so effective that it outcompetes a much stronger win-any-vote strategy.

Scenario 3: Two competing antagonistic politicians form a system in which equal sharing of influence is unstable, in the sense that the system tends to choose one of the candidates at the cost of the other. In terms of biology or physics, the system develops bistability, where a monoculture dominates for long periods of time. The state of a bistable system is history dependent, determined by the few times in history when the two conflicting beliefs are of equal strength. It is tempting to compare the persistent segregation in our model with the geographical segregation of religious beliefs in the real world.

Scenario 4: Consider half the population being liberal and half being conservative: who will people think about? To find out, press the button under "Opinion stubbornness", and half the population will update their interest 10 times faster (blue) than the other half (red).


Martin Rosvall and Kim Sneppen. "Opinion Formation and Social Structure", arXiv:0708.0368.