Memory

Memory Handler Base Class

class econsimulacra.memory.base.ConsumptionHistoryItem(item_name, quantity, time, time_step)[source]

Bases: object

A class representing a consumption history item in the agent’s memory.

Parameters:
  • item_name (str) – the name of the consumed item.

  • quantity (int | float) – the quantity of the consumed item.

  • time (int | str) – the time of the consumption.

  • time_step (int) – the time step of the consumption.

Note

This history item is generated based on the ConsumptionLog. See also: econsimulacra.logs.base.ConsumptionLog, econsimulacra.envs.base.Environment._consume_items(agent_id, consumptions)
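The exact implementation is library-specific, but the signature reads like a plain data container; a minimal sketch of an equivalent dataclass (field types taken from the parameter list above, the example values are made up):

```python
from dataclasses import dataclass


@dataclass
class ConsumptionHistoryItem:
    """Sketch of a consumption history record (not the library's own class)."""
    item_name: str
    quantity: "int | float"
    time: "int | str"
    time_step: int


# Example: the agent consumed 2 units of "bread" at time "day 3", time step 3.
item = ConsumptionHistoryItem("bread", 2, "day 3", 3)
```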

class econsimulacra.memory.base.MoveHistoryItem(pos, init_pos, time, time_step)[source]

Bases: object

A class representing a movement history item in the agent’s memory.

Parameters:
  • pos (tuple[int, …]) – the position of the agent after the movement.

  • init_pos (tuple[int, …]) – the initial position of the agent assigned by the environment.

  • time (int | str, optional) – the time of the movement. It can be None for the initial position assigned by the environment, which is based on the SpaceAssignLog.

  • time_step (int) – the time step of the movement.

Note

This history item is generated based on the MoveLog and SpaceAssignLog. See also: econsimulacra.logs.base.MoveLog, econsimulacra.logs.base.SpaceAssignLog, econsimulacra.envs.base.Environment._move(agent_id, new_pos), econsimulacra.envs.base.Environment._assign_agent_to_space(agent_id, coords)

class econsimulacra.memory.base.PurchaseHistoryItem(item_name, quantity, price, time, time_step, from_agent_id)[source]

Bases: object

A class representing a purchase history item in the agent’s memory.

Parameters:
  • item_name (str) – the name of the purchased item.

  • quantity (int | float) – the quantity of the purchased item.

  • price (int | float) – the price of the purchased item.

  • time (int | str) – the time of the purchase.

  • time_step (int) – the time step of the purchase.

  • from_agent_id (int) – the id of the agent from whom the item is purchased.

Note

This history item is generated based on the OrderReactionLog where the agent is the purchaser and the reaction is accept. See also: econsimulacra.logs.base.OrderReactionLog, econsimulacra.envs.base.Environment._process_reactions(agent_id, reactions)

class econsimulacra.memory.base.SaleHistoryItem(item_name, quantity, price, time, time_step, to_agent_id)[source]

Bases: object

A class representing a sale history item in the agent’s memory.

Parameters:
  • item_name (str) – the name of the sold item.

  • quantity (int | float) – the quantity of the sold item.

  • price (int | float) – the price of the sold item.

  • time (int | str) – the time of the sale.

  • time_step (int) – the time step of the sale.

  • to_agent_id (int) – the id of the agent to whom the item is sold.

Note

This history item is generated based on the OrderReactionLog where the agent is the seller and the reaction is accept. See also: econsimulacra.logs.base.OrderReactionLog, econsimulacra.envs.base.Environment._process_reactions(agent_id, reactions)

class econsimulacra.memory.base.ExchangeHistoryItem(give_item_name, give_item_quantity, get_item_name, get_item_quantity, time, time_step, counterparty_id)[source]

Bases: object

A class representing an exchange history item in the agent’s memory.

Parameters:
  • give_item_name (str) – the name of the item given in the exchange.

  • give_item_quantity (int | float) – the quantity of the item given in the exchange.

  • get_item_name (str) – the name of the item received in the exchange.

  • get_item_quantity (int | float) – the quantity of the item received in the exchange.

  • time (int | str) – the time of the exchange.

  • time_step (int) – the time step of the exchange.

  • counterparty_id (int) – the id of the agent with whom the exchange is made.

Note

This history item is generated based on the ProposalReactionLog where the reaction is accept. See also: econsimulacra.logs.base.ProposalReactionLog, econsimulacra.envs.base.Environment._process_reactions(agent_id, reactions)

class econsimulacra.memory.base.SetPriceHistoryItem(item_name, old_price, new_price, time, time_step)[source]

Bases: object

A class representing a price change history item in the agent’s memory.

Parameters:
  • item_name (str) – the name of the item whose price is changed.

  • old_price (int | float) – the old price of the item.

  • new_price (int | float) – the new price of the item.

  • time (int | str) – the time of the price change.

  • time_step (int) – the time step of the price change.

Note

This history item is generated based on the ChangePriceLog. See also: econsimulacra.logs.base.ChangePriceLog, econsimulacra.envs.base.Environment._set_price(agent_id, set_prices)

class econsimulacra.memory.base.SocialHistoryItem(action, target_agent_id, time, time_step, num_followers, num_follows)[source]

Bases: object

A class representing a social action history item in the agent’s memory.

Parameters:
  • action (Literal['follow', 'unfollow']) – the type of the social action.

  • target_agent_id (int) – the id of the target agent whom the agent follows or unfollows.

  • time (int | str) – the time of the social action.

  • time_step (int) – the time step of the social action.

  • num_followers (int) – the number of followers of the agent after the social action.

  • num_follows (int) – the number of agents that the agent follows after the social action.

Note

This history item is generated based on the FollowLog and UnfollowLog. See also: econsimulacra.logs.base.FollowLog, econsimulacra.logs.base.UnfollowLog, econsimulacra.envs.base.Environment._act_in_social_network(agent_id, tweet, follow_agent_id, unfollow_agent_id)

class econsimulacra.memory.base.StateEvaluationItem(wealth, relative_wealth, buying_power, inventory_dic, persona_dic, time, time_step)[source]

Bases: object

A class representing a state evaluation item in the agent’s memory.

Parameters:
  • wealth (float) – the wealth of the agent at the time of evaluation.

  • relative_wealth (float, optional) – the relative wealth of the agent at the time of evaluation. Only household agents have this value; for other agent types, it is None.

  • buying_power (float, optional) – the buying power of the agent at the time of evaluation. Only household agents have this value; for other agent types, it is None.

  • inventory_dic (dict[str, int | float]) – the inventory of the agent at the time of evaluation.

  • persona_dic (dict[str, Any], optional) – the persona of the agent at the time of evaluation.

  • time (int | str) – the time of the state evaluation.

  • time_step (int) – the time step of the state evaluation.

Note

This history item is generated based on the StateEvaluationLog. See also: econsimulacra.logs.base.StateEvaluationLog, econsimulacra.envs.base.Environment.evaluate_agent_state(agent_id)

class econsimulacra.memory.base.AgentMemory(consumption_history, move_history, purchase_history, sale_history, exchange_history, set_price_history, social_history, state_evaluation_history)[source]

Bases: object

Agent Memory class.

Store the history of the agent’s actions and observations in a summarized form. The memory is updated based on the logs generated by the environment.

Parameters:
  • consumption_history (Deque[ConsumptionHistoryItem]) – the history of the agent’s consumption.

  • move_history (Deque[MoveHistoryItem]) – the history of the agent’s movements.

  • purchase_history (Deque[PurchaseHistoryItem]) – the history of the agent’s purchases.

  • sale_history (Deque[SaleHistoryItem]) – the history of the agent’s sales.

  • exchange_history (Deque[ExchangeHistoryItem]) – the history of the agent’s exchanges.

  • set_price_history (Deque[SetPriceHistoryItem]) – the history of the agent’s price changes.

  • social_history (Deque[SocialHistoryItem]) – the history of the agent’s social actions.

  • state_evaluation_history (Deque[StateEvaluationItem]) – the history of the agent’s state evaluations.

Note

Each history is stored in a deque with a maximum length of memory_length, which is defined in the MemoryHandler. When a history exceeds this length, the oldest items are removed.
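The bounded-history behavior described above is exactly what Python's standard collections.deque with maxlen provides; a minimal illustration:

```python
from collections import deque

# A bounded history: once full, appending evicts the oldest entry,
# mirroring how AgentMemory caps each history at memory_length.
memory_length = 3
consumption_history = deque(maxlen=memory_length)

for step in range(5):
    consumption_history.append(f"item at step {step}")

# Only the 3 most recent entries survive; steps 0 and 1 were evicted.
print(list(consumption_history))
# → ['item at step 2', 'item at step 3', 'item at step 4']
```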

class econsimulacra.memory.base.MemorySummarizer(config, prng=None, registered_classes=None)[source]

Bases: object

Memory Summarizer class.

MemorySummarizer is used to summarize the memory of the agent into a form that can be provided as a part of the observation to the agent.

sync_time(current_time, current_time_step)[source]

Synchronize the current time and time step in the summarizer with the MemoryHandler.

Parameters:
  • current_time (int | str)

  • current_time_step (int)

Return type:

None

class econsimulacra.memory.base.MemoryHandler(config, prng=None, registered_classes=[])[source]

Bases: object

Memory Handler class.

get_memory(agent_id)[source]

Summarize and return the memory of the agent with the given agent_id.

Parameters:

agent_id (int) – the id of the agent whose memory is to be retrieved.

Returns:

the summarized memory of the agent.

Return type:

dict[str, Any]

Note

Memory is provided as a part of the observation to the agent. econsimulacra.envs.obs_providers.MemoryProvider calls this method to get the memory of the agent. See also: econsimulacra.envs.obs_providers.MemoryProvider

The structure of the summarized memory is:

{
    "memory_length": int,
    "move_history": "(x0,y0) -> (x1,y1) -> (x2,y2)",
    "consumption_history": "item_name x quantity at time, ...",
    "purchase_history": "item_name x quantity at price from agent_id at time, ...",
    "sale_history": "item_name x quantity at price to agent_id at time, ...",
    "exchange_history": "give item_name x quantity, get item_name x quantity with agent_id at time; ...",
    "set_price_history": "item_name: old_price -> new_price at time, ...",
    "social_history": "follow target_agent_id at time (num_followers: N, num_follows: M); ...",
    "state_evaluation_history": "Wealth: wealth at time; ...",
}
update(log)[source]

Update memory based on the log. This method is called by the environment.

Note

See also:

econsimulacra.envs.base.Environment.remember_log(log: Log)

Parameters:

log (Log)

Return type:

None

Stress-Aware Summarizer

class econsimulacra.memory.stress_aware_summarizer.StressCalculator(config, prng=None, registered_classes=[])[source]

Bases: object

Stress Calculator class.

StressCalculator is a class that calculates the stress level of the agent based on the memory of the agent.

sync_time(time, time_step)[source]

Sync the current time and time step of the stress calculator.

Parameters:
  • time (int | str)

  • time_step (int)

Return type:

None

summarize_stress(field_name, history)[source]

Summarize the stress level based on the history.

Parameters:
  • field_name (str) – the name of the history field.

  • history (Deque[...]) – the history corresponding to field_name.

Returns:

the summarized stress text.

Return type:

str

class econsimulacra.memory.stress_aware_summarizer.StressAwareSummarizer(config, prng=None, registered_classes=[])[source]

Bases: MemorySummarizer

Stress Aware Summarizer class.

StressAwareSummarizer is a MemorySummarizer that summarizes the memory of the agent into a form that can be provided as a part of the observation to the agent, while being aware of the stress level of the agent.

sync_time(current_time, current_time_step)[source]

Synchronize the current time and time step in the summarizer with the MemoryHandler.

Parameters:
  • current_time (int | str)

  • current_time_step (int)

Return type:

None

Stress Calculation

econsimulacra.memory.stress_utils.calc_stress_from_consumption_history(consumption_history, current_time_step, max_stress, target_quantity, window_size, time_decay, tolerance_threshold, item2weight)[source]

Calculate the stress level based on the consumption history.

Parameters:
  • consumption_history (Deque[ConsumptionHistoryItem]) – A deque of ConsumptionHistoryItem representing the consumption history.

  • current_time_step (int) – The current time step in the simulation.

  • max_stress (int) – The maximum stress level.

  • target_quantity (int) – The target quantity to consume.

  • window_size (int) – The size of the time window in time steps to consider for stress calculation.

  • time_decay (float) – The decay factor for the stress contribution of past consumption events.

  • tolerance_threshold (float) – The tolerance threshold for stress.

  • item2weight (dict[str, float]) – A dictionary mapping item names to their corresponding weights for stress calculation.

Returns:

A tuple containing:

  • stress_level: The calculated stress level.

  • stress_reason: The reason for the stress level.

Return type:

tuple[int, str]

Note

The stress level is calculated based on the quantity of items consumed within the specified time window, weighted by their respective weights, and decayed over time.

The weighted quantity is defined as:

\[Q(t) = \sum_{k: t_k \in [t-W, t]} \gamma^{(t - t_k)} \, w_{i_k} \, q_k\]

where:

  • \(t\) is the current time step

  • \(W\) is the window size

  • \(\gamma\) is the time decay factor

  • \(w_{i_k}\) is the weight of item \(i_k\)

  • \(q_k\) is the consumed quantity at time \(t_k\)

The stress level is then computed as:

\[s(t) = \min\left( s_{\max}, \left\lfloor \frac{|Q(t) - Q^*|}{Q^*} \, s_{\max} \right\rfloor \right)\]

where \(Q^*\) is the target quantity.

Corner cases:

  • If consumption_history is empty and \(t \geq W\), then \(s(t) = s_{\max}\).

  • If consumption_history is empty and \(t < W\), then \(s(t) = 0\).
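The formula and corner cases above can be sketched directly in Python (the function name and the flat event tuples are simplifications for illustration; the library operates on ConsumptionHistoryItem deques):

```python
import math


def calc_consumption_stress(events, t, s_max, q_target, window, gamma, item2weight):
    """Sketch of the weighted, time-decayed consumption stress above.

    events: list of (time_step, item_name, quantity) tuples — a flat
    stand-in for the ConsumptionHistoryItem deque.
    """
    # Corner cases: no history at all.
    if not events:
        return s_max if t >= window else 0
    # Q(t): decayed, weighted quantity consumed within the window [t - W, t].
    q = sum(
        gamma ** (t - tk) * item2weight.get(name, 1.0) * qty
        for tk, name, qty in events
        if t - window <= tk <= t
    )
    # s(t): relative deviation from the target quantity, scaled and capped.
    return min(s_max, math.floor(abs(q - q_target) / q_target * s_max))
```

For example, with gamma = 0.9 and a target of 2.0, consuming one unit of food at each of steps 9 and 10 yields Q(10) = 1.9, a small deviation, so the stress floors to 0.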

econsimulacra.memory.stress_utils.calc_stress_from_move_history(move_history, current_time_step, max_stress, target_distance, window_size, time_decay, tolerance_threshold, home_comfort)[source]

Calculate the stress level based on the move history.

Parameters:
  • move_history (Deque[MoveHistoryItem]) – A deque of MoveHistoryItem representing the move history.

  • current_time_step (int) – The current time step in the simulation.

  • max_stress (int) – The maximum stress level.

  • target_distance (float) – The target distance to move from the initial position.

  • window_size (int) – The size of the time window in time steps to consider for stress calculation.

  • time_decay (float) – The decay factor for the stress contribution of past move events.

  • tolerance_threshold (float) – The tolerance threshold for stress.

  • home_comfort (float) – The comfort level of being at home.

Returns:

A tuple containing:

  • stress_level: The calculated stress level.

  • stress_reason: The reason for the stress level.

Return type:

tuple[int, str]

Note

The stress level is calculated based on the distance moved during the time steps within the specified time window, decayed over time, and adjusted by the home comfort level.

The distance moved is defined as:

\[D(t) = \sum_{k: t_k \in [t-W, t]} \gamma^{(t - t_k)} \, \|x_k - x_{k-1}\|\]

where:

  • \(t\) is the current time step

  • \(W\) is the window size

  • \(\gamma\) is the time decay factor

  • \(\|x_k - x_{k-1}\|\) is the distance moved at time \(t_k\)

The stress level is then computed as:

\[s(t) = \min\left( s_{\max}, \left\lfloor \frac{|D(t) - D^*|}{D^*} \, s_{\max} \, (1 - h)^{\mathbf{1}(x_t = x_0)} \right\rfloor \right)\]

where \(D^*\) is the target distance and \(h\) is the home comfort level.

Corner cases:

  • If move_history is empty and \(t \geq W\), then \(s(t) = s_{\max}\).

  • If move_history is empty and \(t < W\), then \(s(t) = 0\).
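A Python sketch of the move-stress formula above, interpreting the indicator as applying the (1 − h) damping only when the agent is back at its initial position (the function name and the flat position tuples are simplifications; the library operates on MoveHistoryItem deques):

```python
import math


def calc_move_stress(positions, t, s_max, d_target, window, gamma, home_comfort):
    """Sketch of the time-decayed distance stress above.

    positions: list of (time_step, (x, y)) tuples, oldest first — a flat
    stand-in for the MoveHistoryItem deque; positions[0] is the initial
    position assigned by the environment.
    """
    # Corner cases: no recorded movement.
    if len(positions) < 2:
        return s_max if t >= window else 0
    # D(t): decayed Euclidean distance moved within the window [t - W, t].
    d = 0.0
    for (_, prev), (tk, cur) in zip(positions, positions[1:]):
        if t - window <= tk <= t:
            d += gamma ** (t - tk) * math.dist(prev, cur)
    # Home comfort damps stress only when the agent sits at its initial position.
    at_home = positions[-1][1] == positions[0][1]
    damp = (1 - home_comfort) if at_home else 1.0
    return min(s_max, math.floor(abs(d - d_target) / d_target * s_max * damp))
```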

econsimulacra.memory.stress_utils.calc_stress_from_state_evaluation_history(state_evaluation_history, current_time_step, max_stress, target_buying_power, target_relative_wealth, target_wealth_growth, window_size, tolerance_threshold, buying_power_weight=1.0, relative_wealth_weight=1.0, wealth_drawdown_weight=1.0)[source]

Calculate economic stress based on state evaluation history.

Parameters:
  • state_evaluation_history (Deque[StateEvaluationItem]) – A deque of StateEvaluationItem representing the agent’s economic state history.

  • current_time_step (int) – The current time step in the simulation.

  • max_stress (int) – The maximum stress level.

  • target_buying_power (float) – The target buying power level.

  • target_relative_wealth (float) – The target relative wealth level.

  • target_wealth_growth (float) – The target wealth growth over the window.

  • window_size (int) – The size of the time window in time steps.

  • tolerance_threshold (float) – The threshold above which stress is reported.

  • buying_power_weight (float) – Weight for buying-power stress.

  • relative_wealth_weight (float) – Weight for relative-wealth stress.

  • wealth_drawdown_weight (float) – Weight for wealth-drawdown stress.

Returns:

A tuple containing:

  • stress_level: The calculated stress level.

  • stress_reason: The reason for the stress level.

Return type:

tuple[int, str]

Note

The stress is computed from three economic factors:

  1. Buying-power stress: stress from having insufficient purchasing power.

  2. Relative-wealth stress: stress from having less wealth than others.

  3. Wealth-drawdown stress: stress from recent decreases in wealth.

The buying-power stress is defined as:

\[s_{bp}(t) = \max\left( 0, \frac{B^* - B(t)}{B^*} \right)\]

where \(B(t)\) is buying power and \(B^*\) is the target buying power.

The relative-wealth stress is defined as:

\[s_{rw}(t) = \max\left( 0, \frac{R^* - R(t)}{|R^*| + 1 + \epsilon} \right)\]

where \(R(t)\) is relative wealth and \(R^*\) is the target relative wealth.

The wealth-drawdown stress is defined as:

\[s_{dd}(t) = \max\left( 0, \frac{G^* - G(t)}{|G^*| + w_i(0) \epsilon} \right)\]

where \(G(t)\) is the wealth change over the window, \(w_i(0)\) is the initial wealth, and \(G^*\) is the target wealth growth.

The total stress score is:

\[S(t) = \alpha s_{bp}(t) + \beta s_{rw}(t) + \delta s_{dd}(t)\]

and the final stress level is:

\[\min\left( s_{\max}, \left\lfloor S(t) s_{\max} \right\rfloor \right)\]
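The three-factor combination above can be sketched as follows (the function name, the flat state tuples, and the epsilon default are illustrative simplifications; the library operates on StateEvaluationItem deques):

```python
import math


def calc_economic_stress(history, s_max, b_target, r_target, g_target,
                         alpha=1.0, beta=1.0, delta=1.0, eps=1e-9):
    """Sketch of the three-factor economic stress score above.

    history: list of (wealth, relative_wealth, buying_power) tuples over the
    window, oldest first — a flat stand-in for the StateEvaluationItem deque.
    """
    wealth_0, _, _ = history[0]
    wealth_t, rel_wealth_t, buying_power_t = history[-1]
    # Buying-power stress: shortfall relative to the target buying power.
    s_bp = max(0.0, (b_target - buying_power_t) / b_target)
    # Relative-wealth stress: shortfall relative to the target relative wealth.
    s_rw = max(0.0, (r_target - rel_wealth_t) / (abs(r_target) + 1 + eps))
    # Wealth-drawdown stress: growth over the window versus the target growth.
    growth = wealth_t - wealth_0
    s_dd = max(0.0, (g_target - growth) / (abs(g_target) + wealth_0 * eps))
    # Weighted total, scaled to the integer stress scale and capped.
    score = alpha * s_bp + beta * s_rw + delta * s_dd
    return min(s_max, math.floor(score * s_max))
```

For example, an agent whose buying power is half its target (with the other two factors on target) scores s_bp = 0.5 and, with s_max = 10, lands at stress level 5.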