
Semi-Markov Research Trees
An Exploration of Stochastic, Active Research Systems

February 22, 2014

In this proposal, I suggest that the research system in Limit Theory
be modelled on generalised semi-Markov processes, and that agents
within the game be able to undertake activities that affect the paths of
research taken and the rate at which those paths are explored.





Contents

• Markov Processes
• Generalised Semi-Markov Processes
• Choosing Research Paths
• Active Research
• Research Queues



Research is an important aspect of gameplay within Limit Theory. This
mechanism simulates the development of practical knowledge by agents
within the game, knowledge that can confer benefits, unlock new possibilities
for the construction and utilisation of ships and other items, and open up
additional avenues of gameplay.
During a discussion in the Limit Theory IRC channel, the developer
mentioned that his intention was to model research completion events as
Poisson processes. This proposal builds on that idea and introduces
possibilities for how agents can control the research process more actively.
The first section provides the mathematical background necessary for the
rest of the proposal to make sense; the next two sections cover the main
focus of the proposal, and the final section covers an optional extra piece of
functionality for the research system that can be implemented independently
of the other mechanics discussed.

This section aims to give a succinct explanation of the mathematics on
which the core of this proposal is based. The concepts discussed here relate
to probability theory and statistics. The content of this section was written
with assistance from the Simulation and Modelling lecture slides provided
by Imperial College London [1].

Markov Processes
Markov processes are a class of stochastic transition systems. Each state in
the system may have several outgoing transitions, all of which have an
associated rate parameter and 'race' each other to be the first to trigger. The
time spent in each state is therefore the minimum of a set of random variables.
In a Markov process, every transition time is exponentially distributed,
meaning that its probability density function and cumulative distribution
function are defined as shown below, where q_{i,j} represents the rate
parameter for the transition from state i to state j.

PDF: f(x) = q_{i,j} e^{-q_{i,j} x},   x ≥ 0, q_{i,j} ≥ 0

CDF: F(x) = P(X ≤ x) = 1 − e^{-q_{i,j} x}


These grant Markov processes the special property that they are memoryless:
the next state transition is determined exclusively by the current state. The
formal statement is given below, where S(t) ∈ S is the state of the process at
time t and t_1 < t_2 < ... < t_n:

P(S(t_n) = s_n | S(t_{n-1}) = s_{n-1}, ..., S(t_1) = s_1) = P(S(t_n) = s_n | S(t_{n-1}) = s_{n-1})

This special property makes Markov processes ideal for modelling systems
such as thermodynamic processes in physics, the growth of copolymers in
chemistry, and potentially research breakthroughs in Limit Theory.
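This 'race' between exponential clocks is straightforward to simulate. The sketch below is purely illustrative (the state names, rates, and function name are assumptions, not part of the proposal): it samples one exponentially distributed transition time per successor state and lets the minimum win.

```python
import random

# Illustrative sketch of one step of a continuous-time Markov process.
# Each outgoing transition i -> j holds an exponential clock with rate
# rates[j]; the transition whose sampled time is smallest wins the 'race'.
def race(rates, rng):
    """rates: dict mapping successor state -> transition rate q_{i,j}."""
    samples = {j: rng.expovariate(q) for j, q in rates.items()}
    winner = min(samples, key=samples.get)
    return winner, samples[winner]  # next state and time spent in this state

rng = random.Random(42)
next_state, dwell = race({"A": 0.5, "B": 2.0}, rng)
```

Because the clocks are exponential, the probability that a given transition wins is proportional to its rate, which is why higher-rate transitions fire sooner on average.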

Generalised Semi-Markov Processes
Generalised semi-Markov processes are a stochastic model for discrete-event
simulation. They each comprise:
• A set S of states (finite or countably infinite)
• A set of events E = {e_1, ..., e_N}
• E(s)—the set of events scheduled to occur in state s ∈ S
• p(s'; s, e)—the probability that the next state is s' when event e occurs in state s
• F(x; s', e', s, e)—the distribution function used to set the clock for a new event e' when event e triggers a transition from s to s'
• r(s, e)—the rate at which the clock for event e ∈ E runs down in state s
• S_0, F_0—the distribution functions used to assign the initial state and set the clocks for all active events in that state
In GSMPs, state transitions are dependent upon the ticking down of associated clocks; the rates at which clocks tick down are in turn dependent upon
the state of the system. The event whose clock reaches zero first in any given
state triggers the transition out of that state. A clock setting function sets the
clocks for new events that are ‘scheduled’ during this transition.
The clocks of old events (events that are active in both the old and new
state) continue to run down as before, whereas the clocks of cancelled events
(events that were active in the old state but not the new one) are descheduled.
These three types of clocks are formally described below, where s is the old
state, s' is the new state, E(x) gives the set of active events in state x, and e
is the event that triggered the transition from s to s':

New: E(s') − (E(s) − {e})

Old: E(s') ∩ (E(s) − {e})

Cancelled: (E(s) − {e}) − E(s')


Figure 1 illustrates this concept diagrammatically.

Figure 1: A representation of transitions in GSMPs


The semi-Markov property of GSMPs arises because the next state is
determined by the current state and the event that just occurred, but the
clock-setting functions may be such that the inter-event times are no longer
exponentially distributed (as they are in Markov processes), and therefore
the age of the current state matters.
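The new/old/cancelled distinction reduces to plain set algebra. A minimal sketch (the function and event names here are assumptions, not from the proposal), where E_s and E_s2 stand for the active event sets E(s) and E(s') and e is the triggering event:

```python
# Classify clocks during a GSMP transition from state s to s', triggered by e.
def classify_clocks(E_s, E_s2, e):
    survivors = E_s - {e}          # E(s) - {e}: clocks that might carry over
    new = E_s2 - survivors         # newly scheduled clocks
    old = E_s2 & survivors         # clocks that keep their current value
    cancelled = survivors - E_s2   # clocks that are descheduled
    return new, old, cancelled

new, old, cancelled = classify_clocks({"e1", "e2", "e3"}, {"e2", "e4"}, "e1")
# new == {"e4"}, old == {"e2"}, cancelled == {"e3"}
```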

Choosing Research Paths
Research paths in Limit Theory should not be linear. Instead, performing
research on a node in the research system should allow for multiple other
nodes to potentially be reached, hence forming a branching tree structure.
Research trees should be modelled as generalised semi-Markov processes,
with research items as nodes and the connections between them as transitions.
Whenever an agent chooses to perform research on a node, a clock-setting
function samples a probability distribution to set the clocks associated with
each transition from the selected research item to the research items that can
follow from it. These sampled values should not be visible to the agent. After
the agent has confirmed that they want to perform research on a given node,
these clocks begin running down at a rate dependent upon the node being
researched and other external factors. When one of these clocks reaches
zero, the corresponding transition is triggered and the agent achieves a
'breakthrough' that adds the associated research item to their repository of
total theoretical knowledge.
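As a rough illustration of this clock mechanism (all item names, distributions, and rates below are assumptions for the sketch, not values fixed by the proposal): a hidden clock is sampled for each outgoing transition of the researched node, each clock runs down at its own rate, and the first to reach zero produces the breakthrough.

```python
import random

# Illustrative sketch of one research step in the proposed system.
def research_step(transitions, rng):
    """transitions: dict mapping reachable item -> (mean_clock, rundown_rate)."""
    finish_times = {}
    for item, (mean_clock, rate) in transitions.items():
        clock = rng.expovariate(1.0 / mean_clock)  # hidden initial clock value
        finish_times[item] = clock / rate          # time until clock hits zero
    breakthrough = min(finish_times, key=finish_times.get)
    return breakthrough, finish_times[breakthrough]

rng = random.Random(7)
item, t = research_step({"Biology III": (10.0, 1.5),
                         "Genetics I": (40.0, 1.0),
                         "Botany I": (15.0, 1.2)}, rng)
```

The exponential distribution here is only a placeholder; any clock-setting distribution would slot into the same structure, which is precisely the generality that GSMPs provide over plain Markov processes.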
There are hence two determinants of the time it takes for a 'breakthrough'
to occur for a specific research item:
• The initial value of the associated clock—this is a statically-determined
value that is based on the intrinsic properties of the research item, such
as its complexity and level. If the research item in question is Neuropsychology I, its associated probability distribution is likely going
to have a higher mean than for a research item such as Accounting I.
Similarly, a research item such as Electronics IV is likely going to have
an associated probability distribution with a higher mean than the one

for Electronics II. The higher the sampled values for research items,
the longer it will take for them to be researched, ceteris paribus.
• The clock run-down rate—this is a dynamically-determined value that is
based on a variety of factors, such as the intelligence of the researching
agent, the quality of the research equipment or modules used, and
other external factors that will be discussed later in this proposal. The
better the research being performed, the faster the associated clock
will run down to zero, and the sooner the 'breakthrough' will occur.
Because the research system is modelled as a GSMP, the same concepts of
new, old and cancelled clocks apply:
• A clock is considered new if its corresponding terminal research item
becomes a possible research target and was not a possible research
target before the previous breakthrough occurred. These clocks will
be scheduled, i.e. they will have an initial value assigned to them that
is sampled from a probability distribution.
• A clock is considered old if its corresponding terminal research item
was previously a possible research target and remains an available
research target after the previous breakthrough of some other research
item was achieved. These clocks retain their current value and continue to run down as before.
• A clock is considered cancelled if a research breakthrough occurred and
its associated terminal research item is no longer a possible research
target from the next item that is selected for research. These clocks are
descheduled.

Figure 2 illustrates how this proposal would work if an agent wished to
conduct research on a research item such as Biology II. Figure 3 shows how
further research could then be performed after the initial breakthrough of
Biology III is achieved, with new clocks represented by blue transitions, old
clocks represented by green transitions and cancelled clocks represented by
red transitions.

Figure 2: An example of research being performed on Biology II (diagram nodes: Biology II, Biology III, Genetics I, Botany I)

Figure 3: An example of a second round of research being performed (diagram nodes: Biology II, Biology III, Biology IV, Genetics I, Ecology I, Botany I)


Active Research
As described in the previous section, the particular breakthrough that occurs
during research and the time until it occurs are determined by both a static
and dynamic component. The dynamic component of research is based on
external factors, some of which should be controllable (to a degree) by the
agent performing the research. This allows for the possibility that agents
can engage in specific activities to influence the research process in some
interesting and desirable ways.
For this to work, we could imagine different activities and objects in the
game world as having different values for different science 'components',
which could be represented as a vector (Sci_1, Sci_2, ..., Sci_n); the names
of these components aren't important, as they will not be visible to the agent.
For readability's sake, we can call this a 'science vector'. Every transition
from one research item to another in the research tree then uses a normalised
science vector to specify a ratio, which determines how important each
science component is to that research avenue. Whenever an agent performs
an action with scientific value, the action's science components are multiplied
component-wise by those of the research path, and the sum of these products
determines the rate at which the associated clock for that research path
ticks down.

 


 

r(s, e) = Σ_{k=1}^{n} a_k v_k(s, e)

where (a_1, ..., a_n) is the science vector of the agent's action and
(v_1(s, e), ..., v_n(s, e)) is the normalised science vector of the transition
associated with event e in state s.
By undertaking different activities, the agent can affect the rate at which
each clock for the possible research transitions ticks down, and therefore
influence which possible research breakthrough is actualised and at what
time. Since the clock values and science vectors are hidden from the agent,
they will need to gain an intuitive feel for how different kinds of activities
affect the research rates of different research items. This necessitates that the
normalised science vectors are designed intelligently; if one science
component (say, Sci_5) is important to Botany I, it would make sense for it to
be an important component of Botany II or Agriculture I.
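The rate computation described above amounts to a dot product. A minimal sketch, assuming a simple sum-to-one normalisation of the transition's science vector (the function name and normalisation scheme are assumptions):

```python
# Clock run-down rate r(s, e): dot product of an action's science vector
# with the normalised science vector of a research transition.
def rundown_rate(action_vec, transition_vec):
    total = sum(transition_vec)
    weights = [w / total for w in transition_vec]  # normalise to a ratio
    return sum(a * w for a, w in zip(action_vec, weights))

# An action strong in the first component speeds up a path weighted
# towards that component.
r = rundown_rate([3.0, 0.5], [2.0, 2.0])  # -> 1.75
```

This formulation also gives designers a single tuning surface: adjusting a transition's weights changes which activities accelerate it without touching the activities themselves.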
The particular gameplay behind active research has not been considered
in great detail within this proposal; however, some possibilities are
discussed below.
One possibility for active research might be to perform analysis of celestial
objects such as star systems or planets. Analysis of different stars and
planets may yield information that is valuable to different kinds of research.
This would have the advantage of encouraging the agent to explore and
could conceivably fit the feel of Limit Theory quite well, as it does not involve any considerable divergence from regular gameplay as might be the
case with minigames. However, it may be considered strange that analysis
of a star could in any way benefit research into Business Management I,
for instance, so this would likely need to be supplemented with other forms
of active research.
A second possibility might be the examination of different artifacts, that
is, technology built by other agents that is based on theoretical knowledge
that exceeds that of the agent in question [2]. For instance, performing
analysis on an advanced form of reactor that has been salvaged from a
wreck might be an activity that provides a high Sci4 component, which may
be very important to research paths related to energy production and a great
help to the agent if they are engaged in a form of research along those lines
at the time.
It would also be interesting to consider possibilities that allow the agent
to engage in research in the middle of transit between two distant points, as
this can help keep travel interesting if this agent happens to be the player [3].
However, these would most likely have to involve some kind of minigame
mechanic, which can be fun but must be designed to support the feel of
Limit Theory and not cause a jarring sense of discontinuity with the rest of
the gameplay.

