Archive for 6 June 2013

06 Jun 13

Applications in Agent-Based Computational Economics – Munich Personal RePEc Archive

See on Scoop.it: Bounded Rationality and Beyond

Abstract

A constituent feature of adaptive complex systems is the presence of non-linear feedback mechanisms between actors, which often makes such systems difficult to model and analyse. Agent-based Computational Economics (ACE) uses computer simulation methods to represent such systems and to analyse their non-linear processes.

The aim of this thesis is to explore ways of modelling adaptive agents in ACE models. Its major contribution is of a methodological nature. Artificial intelligence and machine learning methods are used to represent agents and learning processes in ACE models.

In this work, a general reinforcement learning framework is developed and realised in a simulation system. This system is used to implement three models of increasing complexity in two different economic domains. One of these domains is iterative games, in which agents meet repeatedly and interact. In an experimental labour market, it is shown how statistical discrimination can be generated simply by means of the learning algorithm used. The aim of this model is mainly to illustrate the features of the learning framework; the results resemble patterns of human behaviour observed in laboratory settings. The second model treats strategic network formation. The main contribution here is to show how agent-based modelling helps to analyse the non-linearity that is introduced when the assumptions of perfect information and full rationality are relaxed. The other domain has a Health Economics background. The aim here is to provide insight into how the approach might be useful in real-world applications. For this, a general model of primary care is developed, and the implications of different consumer behaviours (based on the learning features introduced before) are analysed.
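The abstract does not specify which learning algorithm the thesis uses. As a purely illustrative sketch of the general idea (reinforcement-learning agents interacting in an iterative game), here is a minimal Roth-Erev-style learner, a common choice in ACE work: each agent keeps a propensity for each action, chooses actions with probability proportional to those propensities, and reinforces the chosen action by the payoff it earned. The class name, the game, and all parameters are assumptions, not the thesis's actual framework.

```python
import random

random.seed(0)  # reproducible run for this illustration

class RothErevAgent:
    """Illustrative Roth-Erev-style reinforcement learner (an assumption;
    the thesis's actual learning framework is not described in the abstract)."""

    def __init__(self, actions, initial=1.0):
        # Every action starts with the same positive propensity.
        self.propensities = {a: initial for a in actions}

    def choose(self):
        # Pick an action with probability proportional to its propensity.
        actions = list(self.propensities)
        weights = list(self.propensities.values())
        return random.choices(actions, weights=weights)[0]

    def learn(self, action, payoff):
        # Reinforce the chosen action by the (non-negative) payoff received.
        self.propensities[action] += payoff

# Two agents repeatedly play a simple coordination game:
# both earn 1 if they choose the same action, 0 otherwise.
agents = [RothErevAgent(["A", "B"]) for _ in range(2)]
for _ in range(500):
    moves = [agent.choose() for agent in agents]
    payoff = 1.0 if moves[0] == moves[1] else 0.0
    for agent, move in zip(agents, moves):
        agent.learn(move, payoff)
```

Over repeated play, successful coordination reinforces itself and the pair tends to lock in on one action, which is the kind of emergent, path-dependent behaviour that makes such systems non-linear and hard to treat analytically.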

See on mpra.ub.uni-muenchen.de


06 Jun 13

Are risk preferences dynamic? Within-subject variation in risk-taking as a function of background music

See on Scoop.it: Bounded Rationality and Beyond

This paper investigates whether preference interactions can explain why risk preferences change over time and across contexts. We conduct an experiment in which subjects accept or reject gambles involving real money gains and losses. We introduce within-subject variation by alternating subjectively liked and disliked music in the background. We find that favourite music increases risk-taking and disliked music suppresses risk-taking, compared to a baseline of no music. Several theories in psychology propose mechanisms by which mood affects risk-taking, but none of them fully explain our results. The results are, however, consistent with preference complementarities that extend to risk preference.

See on econstor.eu




