
Monte Carlo Methods

Unlike dynamic programming methods, Monte Carlo methods do not assume complete knowledge of the environment. They require only experience: sample sequences of states, actions, and rewards from actual or simulated interaction with an environment.

Monte Carlo Prediction

The idea underlying all Monte Carlo methods is that, as more returns are observed, their average converges to the expected value. We therefore begin by considering Monte Carlo methods for learning the state-value function for a given policy. A simple way to estimate the value of a state from experience is to average the returns observed after visits to that state.
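In symbols (following the standard notation of Sutton and Barto), if an episode visits state $s$ for the first time at time $t$, the return that follows is

$$
G_t = R_{t+1} + \gamma R_{t+2} + \cdots + \gamma^{T-t-1} R_T = \sum_{k=0}^{T-t-1} \gamma^k R_{t+k+1},
$$

and the first-visit Monte Carlo estimate is the sample mean of these returns over the $N(s)$ episodes that visit $s$:

$$
V(s) \approx \frac{1}{N(s)} \sum_{i=1}^{N(s)} G_i .
$$

By the law of large numbers this average converges to $v_\pi(s)$ as $N(s) \to \infty$.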

Play with Blackjack ENV

# BlackjackEnv is assumed to be the Blackjack environment shipped with the
# accompanying exercise code; it follows the old gym API used throughout this post.
from lib.envs.blackjack import BlackjackEnv

env = BlackjackEnv()

# The observation is a 3-tuple of: the player's current sum,
# the dealer's one showing card (1-10 where 1 is ace),
# and whether or not the player holds a usable ace (0 or 1).
def print_observation(observation):
    score, dealer_score, usable_ace = observation
    print("Player Score: {} (Usable Ace: {}), Dealer Score: {}".format(
        score, usable_ace, dealer_score))

def strategy(observation):
    score, dealer_score, usable_ace = observation
    # Stick (action 0) if the score is >= 20, hit (action 1) otherwise
    return 0 if score >= 20 else 1

for i_episode in range(20):
    observation = env.reset()
    for t in range(100):
        print_observation(observation)
        action = strategy(observation)
        print("Taking action: {}".format(["Stick", "Hit"][action]))
        observation, reward, done, _ = env.step(action)
        if done:
            print_observation(observation)
            print("Game end. Reward: {}\n".format(float(reward)))
            break
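If the BlackjackEnv class is not at hand, a minimal stand-in with the same observation and step() interface can be sketched as below. Everything here (the class name SimpleBlackjackEnv and the rules) is my own simplification of the Blackjack example in Sutton and Barto: an infinite deck, face cards counting as 10, a dealer who hits until 17, and no special handling of naturals; the original environment may differ in details.

import random

def draw_card():
    # 1 is an ace; 2-9 count face value; 10, J, Q, K all count as 10.
    return min(random.randint(1, 13), 10)

def has_usable_ace(hand):
    # An ace is "usable" if counting it as 11 does not bust the hand.
    return 1 in hand and sum(hand) + 10 <= 21

def hand_value(hand):
    return sum(hand) + 10 if has_usable_ace(hand) else sum(hand)

class SimpleBlackjackEnv:
    """Observation: (player sum, dealer's showing card, usable-ace flag)."""

    def reset(self):
        self.player = [draw_card(), draw_card()]
        self.dealer = [draw_card(), draw_card()]
        return self._obs()

    def _obs(self):
        return (hand_value(self.player), self.dealer[0], has_usable_ace(self.player))

    def step(self, action):
        if action == 1:  # hit: draw a card; lose immediately on bust
            self.player.append(draw_card())
            if hand_value(self.player) > 21:
                return self._obs(), -1.0, True, {}
            return self._obs(), 0.0, False, {}
        # stick: the dealer draws until reaching 17, then hands are compared
        while hand_value(self.dealer) < 17:
            self.dealer.append(draw_card())
        if hand_value(self.dealer) > 21:
            reward = 1.0
        else:
            diff = hand_value(self.player) - hand_value(self.dealer)
            reward = 1.0 if diff > 0 else (-1.0 if diff < 0 else 0.0)
        return self._obs(), reward, True, {}

With this stand-in, env = SimpleBlackjackEnv() can be dropped into the loop above unchanged.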

First-visit MC prediction algorithm

from collections import defaultdict

def mc_prediction(policy, env, num_episodes, discount_factor=1.0):
    """
    Monte Carlo prediction algorithm. Calculates the value function
    for a given policy using sampling.

    Args:
        policy: A function that maps an observation to an action.
        env: OpenAI gym environment.
        num_episodes: Number of episodes to sample.
        discount_factor: Gamma discount factor.

    Returns:
        A dictionary that maps from state -> value.
        The state is a tuple and the value is a float.
    """

    # Keeps track of sum and count of returns for each state
    # to calculate an average. We could use an array to save all
    # returns (like in the book) but that's memory inefficient.
    returns_sum = defaultdict(float)
    returns_count = defaultdict(float)

    # The final value function
    V = defaultdict(float)

    for i_episode in range(num_episodes):
        # Generate one episode: a list of (state, action, reward) tuples
        episode = []
        state = env.reset()
        while True:
            action = policy(state)
            next_state, reward, done, _ = env.step(action)
            episode.append((state, action, reward))
            if done:
                break
            state = next_state

        # Find all states we've visited in this episode.
        # We convert each state to a tuple so that we can use it as a dict key.
        states_in_episode = set([tuple(x[0]) for x in episode])
        for state in states_in_episode:
            # Find the first occurrence of the state in the episode
            first_occurrence_idx = next(i for i, x in enumerate(episode)
                                        if x[0] == state)
            # Sum up all discounted rewards from the first occurrence onwards
            G = sum(x[2] * (discount_factor ** i)
                    for i, x in enumerate(episode[first_occurrence_idx:]))
            # Calculate average return for this state over all sampled episodes
            returns_sum[state] += G
            returns_count[state] += 1.0
            V[state] = returns_sum[state] / returns_count[state]

    return V
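As a usage sketch, the value function can be estimated under the stick-on-20-or-21 strategy defined earlier; the episode counts match the figures below, and the names V_10k and V_500k are introduced here only so the plotting snippet can refer to them.

V_10k = mc_prediction(strategy, env, num_episodes=10000)
V_500k = mc_prediction(strategy, env, num_episodes=500000)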

Figure: estimated value function after 10,000 and 500,000 steps, with and without a usable ace.
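The four surfaces can be reproduced with a matplotlib sketch along the following lines; the helper function and styling here are my own, and the original figures may have been drawn with a different plotting utility.

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (registers the 3D projection on older matplotlib)

def plot_value_surface(V, usable_ace, ax, title):
    # Player sums below 12 involve no real decision, so plot 12-21 only.
    player_range = range(12, 22)
    dealer_range = range(1, 11)
    X, Y = np.meshgrid(dealer_range, player_range)
    Z = np.array([[V.get((player, dealer, usable_ace), 0.0)
                   for dealer in dealer_range]
                  for player in player_range])
    ax.plot_surface(X, Y, Z, cmap="viridis")
    ax.set_xlabel("Dealer showing")
    ax.set_ylabel("Player sum")
    ax.set_zlabel("Value")
    ax.set_title(title)

fig = plt.figure(figsize=(12, 10))
for row, (V, label) in enumerate([(V_10k, "10000 Steps"), (V_500k, "500000 Steps")]):
    for col, usable in enumerate([False, True]):
        ax = fig.add_subplot(2, 2, 2 * row + col + 1, projection="3d")
        suffix = "Usable Ace" if usable else "No Usable Ace"
        plot_value_surface(V, usable, ax, "{}, {}".format(label, suffix))
plt.show()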

If you like my blog, please consider donating.