def inferenceByVariableElimination(bayesNet, queryVariables, evidenceDict, eliminationOrder):
    """
    Question 6: Your inference by variable elimination implementation

    This function should perform a probabilistic inference query that
    returns the factor: P(queryVariables | evidenceDict)

    It should perform inference by interleaving joining on a variable
    and eliminating that variable, in the order of variables according
    to eliminationOrder.  See inferenceByEnumeration for an example on
    how to use these functions.

    You need to use joinFactorsByVariable to join all of the factors
    that contain a variable in order for the autograder to recognize
    that you performed the correct interleaving of joins and eliminates.

    If a factor that you are about to eliminate a variable from has
    only one unconditioned variable, you should not eliminate it and
    instead just discard the factor.  This is since the result of the
    eliminate would be 1 (you marginalize all of the unconditioned
    variables), but it is not a valid factor.  So this simplifies
    using the result of eliminate.

    The sum of the probabilities should sum to one (so that it is a true
    conditional probability, conditioned on the evidence).

    bayesNet:         The Bayes Net on which we are making a query.
    queryVariables:   A list of the variables which are unconditioned
                      in the inference query.
    evidenceDict:     An assignment dict {variable : value} for the
                      variables which are presented as evidence
                      (conditioned) in the inference query.
    eliminationOrder: The order to eliminate the variables in.

    Hint: BayesNet.getAllCPTsWithEvidence will return all the Conditional
    Probability Tables even if an empty dict (or None) is passed in for
    evidenceDict.  In this case it will not specialize any variable
    domains in the CPTs.

    Useful functions:
    BayesNet.getAllCPTsWithEvidence
    normalize
    eliminate
    joinFactorsByVariable
    joinFactors
    """

    # this is for autograding -- don't modify
    joinFactorsByVariable = joinFactorsByVariableWithCallTracking(callTrackingList)
    eliminate = eliminateWithCallTracking(callTrackingList)

    if eliminationOrder is None:  # set an arbitrary elimination order if None given
        eliminationVariables = bayesNet.variablesSet() - set(queryVariables) - \
                               set(evidenceDict.keys())
        eliminationOrder = sorted(list(eliminationVariables))

    # grab all factors where we know the evidence variables
    # (to reduce the size of the tables)
    currentFactorsList = bayesNet.getAllCPTsWithEvidence(evidenceDict)

    # interleave joining on each elimination variable and eliminating it
    for joinVariable in eliminationOrder:
        currentFactorsList, joinedFactor = joinFactorsByVariable(currentFactorsList, joinVariable)
        # a factor with a single unconditioned variable would eliminate to a
        # trivial (invalid) factor, so discard it instead
        if len(joinedFactor.unconditionedVariables()) > 1:
            currentFactorsList.append(eliminate(joinedFactor, joinVariable))

    fullJoint = joinFactors(currentFactorsList)
    return normalize(fullJoint)
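# The join/eliminate interleaving above can be sketched without the project's
# Factor/BayesNet classes. Below is a minimal, framework-free illustration
# using plain dicts as factors (an assignment tuple of (variable, value)
# pairs maps to a probability); the toy net A -> B and its numbers are
# illustrative assumptions, not part of the project API.

```python
def join(f, g):
    """Pointwise product of two dict-factors over the union of their variables."""
    out = {}
    for ka, pa in f.items():
        for kb, pb in g.items():
            da, db = dict(ka), dict(kb)
            # only compatible assignments (agreeing on shared variables) combine
            if all(da[v] == db[v] for v in da.keys() & db.keys()):
                merged = tuple(sorted({**da, **db}.items()))
                out[merged] = pa * pb
    return out

def eliminate(f, var):
    """Sum var out of dict-factor f (marginalization)."""
    out = {}
    for k, p in f.items():
        reduced = tuple(kv for kv in k if kv[0] != var)
        out[reduced] = out.get(reduced, 0.0) + p
    return out

# P(A) and P(B | A) as factor tables for a toy net A -> B
fA = {(("A", True),): 0.3, (("A", False),): 0.7}
fBA = {(("A", True), ("B", True)): 0.8, (("A", True), ("B", False)): 0.2,
       (("A", False), ("B", True)): 0.1, (("A", False), ("B", False)): 0.9}

# join on A, then eliminate A: the result is the marginal P(B)
pB = eliminate(join(fA, fBA), "A")
```

This mirrors one pass of the loop above: join all factors mentioning the
elimination variable, then sum that variable out of the joined factor.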
def inferenceByLikelihoodWeightingSampling(bayesNet, queryVariables, evidenceDict, numSamples):
    """
    Question 6: Inference by likelihood weighted sampling

    This function should perform a probabilistic inference query that
    returns the factor: P(queryVariables | evidenceDict)

    It should perform inference by performing likelihood weighting
    sampling.  It should sample numSamples times.

    In order for the autograder's solution to match yours, your outer
    loop needs to iterate over the number of samples, with the inner
    loop sampling from each variable's factor.  Use the ordering of
    variables provided by BayesNet.linearizeVariables in your inner
    loop so that the order of samples matches the autograder's.
    There are typically many linearization orders of a directed acyclic
    graph (DAG), however we just use a particular one.

    The sum of the probabilities should sum to one (so that it is a true
    conditional probability, conditioned on the evidence).

    bayesNet:       The Bayes Net on which we are making a query.
    queryVariables: A list of the variables which are unconditioned in
                    the inference query.
    evidenceDict:   An assignment dict {variable : value} for the
                    variables which are presented as evidence
                    (conditioned) in the inference query.
    numSamples:     The number of samples that should be taken.

    Useful functions:
    sampleFromFactor
    normalize
    BayesNet.getCPT
    BayesNet.linearizeVariables
    """
    sampleFromFactor = sampleFromFactorRandomSource(randomSource)

    evidenceVariables = set(evidenceDict.keys())
    reducedDomains = bayesNet.getReducedVariableDomains(evidenceDict)
    newFactor = Factor(queryVariables, evidenceVariables, reducedDomains)

    for _ in range(numSamples):
        assignmentDict = {}
        weight = 1.0
        for var in bayesNet.linearizeVariables():
            varCPT = bayesNet.getCPT(var)
            if var in evidenceVariables:
                # evidence variable: fix its value and weight by its likelihood
                assignmentDict[var] = evidenceDict[var]
                weight *= varCPT.getProbability(assignmentDict)
            else:
                # non-evidence variable: sample it given its parents
                assignmentDict[var] = sampleFromFactor(varCPT, assignmentDict)[var]
        newFactor.setProbability(assignmentDict,
                                 newFactor.getProbability(assignmentDict) + weight)

    return normalize(newFactor)
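# The sample-and-weight loop above can also be sketched without the project's
# classes. Below is a framework-free likelihood-weighting estimate on a toy
# net Rain -> WetGrass; the CPT numbers and the fixed seed are illustrative
# assumptions, not part of the project.

```python
import random

P_RAIN = {True: 0.2, False: 0.8}
P_WET_GIVEN_RAIN = {True: {True: 0.9, False: 0.1},
                    False: {True: 0.2, False: 0.8}}

def likelihood_weighting(num_samples, evidence_wet=True, seed=0):
    """Estimate P(Rain | WetGrass = evidence_wet) by likelihood weighting."""
    rng = random.Random(seed)
    weights = {True: 0.0, False: 0.0}
    for _ in range(num_samples):
        # sample non-evidence variables in topological order
        rain = rng.random() < P_RAIN[True]
        # evidence variable: don't sample it, multiply its likelihood into the weight
        w = P_WET_GIVEN_RAIN[rain][evidence_wet]
        weights[rain] += w
    total = weights[True] + weights[False]
    return {r: w / total for r, w in weights.items()}  # normalize

est = likelihood_weighting(20000)
```

With enough samples the estimate approaches the exact posterior
P(Rain=true | Wet=true) = (0.2 * 0.9) / (0.2 * 0.9 + 0.8 * 0.2).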
def inferenceByLikelihoodWeightingSampling(bayesNet, queryVariables, evidenceDict, numSamples):
    """
    Question 6: Inference by likelihood weighted sampling.

    Returns the factor P(queryVariables | evidenceDict); see the full
    specification in the docstring of the first
    inferenceByLikelihoodWeightingSampling above.
    """
    sampleFromFactor = sampleFromFactorRandomSource(randomSource)

    newFactor = Factor(queryVariables, evidenceDict.keys(),
                       bayesNet.getReducedVariableDomains(evidenceDict))
    for _ in range(numSamples):
        weight = 1.0
        sample = {}
        for variable in bayesNet.linearizeVariables():
            if variable in evidenceDict:
                sample[variable] = evidenceDict[variable]
                weight *= bayesNet.getCPT(variable).getProbability(sample)
            else:
                sample.update(sampleFromFactor(bayesNet.getCPT(variable), sample))
        newFactor.setProbability(sample, weight + newFactor.getProbability(sample))
    return normalize(newFactor)
def inferenceByEnumeration(bayesNet, queryVariables, evidenceDict):
    """
    An inference by enumeration implementation provided as reference.
    This function performs a probabilistic inference query that
    returns the factor: P(queryVariables | evidenceDict)

    bayesNet:       The Bayes Net on which we are making a query.
    queryVariables: A list of the variables which are unconditioned in
                    the inference query.
    evidenceDict:   An assignment dict {variable : value} for the
                    variables which are presented as evidence
                    (conditioned) in the inference query.
    """
    callTrackingList = []
    joinFactorsByVariable = joinFactorsByVariableWithCallTracking(callTrackingList)
    eliminate = eliminateWithCallTracking(callTrackingList)

    # initialize return variables and the variables to eliminate
    evidenceVariablesSet = set(evidenceDict.keys())
    queryVariablesSet = set(queryVariables)
    eliminationVariables = (bayesNet.variablesSet() - evidenceVariablesSet) - queryVariablesSet

    # grab all factors where we know the evidence variables
    # (to reduce the size of the tables)
    currentFactorsList = bayesNet.getAllCPTsWithEvidence(evidenceDict)

    # join all factors by variable
    for joinVariable in bayesNet.variablesSet():
        currentFactorsList, joinedFactor = joinFactorsByVariable(currentFactorsList, joinVariable)
        currentFactorsList.append(joinedFactor)

    # currentFactorsList should now contain the connected components of the
    # graph as factors; join the connected components
    fullJoint = joinFactors(currentFactorsList)

    # marginalize all variables that aren't query or evidence
    incrementallyMarginalizedJoint = fullJoint
    for eliminationVariable in eliminationVariables:
        incrementallyMarginalizedJoint = eliminate(incrementallyMarginalizedJoint,
                                                   eliminationVariable)
    fullJointOverQueryAndEvidence = incrementallyMarginalizedJoint

    # normalize so that the probability sums to one; the input factor contains
    # only the query variables and the evidence variables, both as
    # unconditioned variables
    queryConditionedOnEvidence = normalize(fullJointOverQueryAndEvidence)
    # now the factor is conditioned on the evidence variables

    # the order is: join on all variables, then eliminate on all
    # elimination variables
    return queryConditionedOnEvidence
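# The reference function above boils down to: build the full joint, keep the
# rows consistent with the evidence, and normalize. A framework-free sketch on
# a toy two-variable net A -> B (the numbers are illustrative assumptions,
# not part of the project):

```python
from itertools import product

P_A = {True: 0.3, False: 0.7}
P_B_GIVEN_A = {True: {True: 0.8, False: 0.2},
               False: {True: 0.1, False: 0.9}}

def enumerate_query(evidence_b):
    """Return P(A | B = evidence_b) as a dict over A's values."""
    # full joint P(A, B) by enumerating every assignment
    joint = {(a, b): P_A[a] * P_B_GIVEN_A[a][b]
             for a, b in product([True, False], repeat=2)}
    # keep only assignments consistent with the evidence, then normalize
    unnormalized = {a: joint[(a, evidence_b)] for a in [True, False]}
    z = sum(unnormalized.values())
    return {a: p / z for a, p in unnormalized.items()}
```

This is exact but enumerates the whole joint, which is exactly the cost that
variable elimination's interleaved join/eliminate avoids.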
def inferenceByLikelihoodWeightingSampling(bayesNet, queryVariables, evidenceDict, numSamples):
    """
    Question 7: Inference by likelihood weighted sampling.

    Returns the factor P(queryVariables | evidenceDict); see the full
    specification in the docstring of the first
    inferenceByLikelihoodWeightingSampling above.
    """
    sampleFromFactor = sampleFromFactorRandomSource(randomSource)

    linearVars = bayesNet.linearizeVariables()
    # queryVariables are the unconditioned variables;
    # the keys of evidenceDict are the conditioned variables
    newFactor = Factor(queryVariables, evidenceDict.keys(),
                       bayesNet.getReducedVariableDomains(evidenceDict))

    for _ in range(numSamples):
        weight = 1.0
        assignment = dict()
        for variable in linearVars:
            if variable in evidenceDict:
                assignment[variable] = evidenceDict[variable]
                weight *= bayesNet.getCPT(variable).getProbability(assignment)
            else:
                assignment.update(sampleFromFactor(bayesNet.getCPT(variable), assignment))
        newFactor.setProbability(assignment, weight + newFactor.getProbability(assignment))
    return normalize(newFactor)
def inferenceByLikelihoodWeightingSampling(bayesNet, queryVariables, evidenceDict, numSamples):
    """
    Question 6: Inference by likelihood weighted sampling.

    Returns the factor P(queryVariables | evidenceDict); see the full
    specification in the docstring of the first
    inferenceByLikelihoodWeightingSampling above.
    """
    sampleFromFactor = sampleFromFactorRandomSource(randomSource)

    # the evidence-specialized CPTs all share the reduced variable domains
    currentFactorsList = bayesNet.getAllCPTsWithEvidence(evidenceDict)
    newFactor = Factor(queryVariables, evidenceDict.keys(),
                       currentFactorsList[0].variableDomainsDict())

    for _ in range(numSamples):
        w = 1.0
        conditionedAssignments = {}
        for variable in bayesNet.linearizeVariables():
            if variable in evidenceDict:
                conditionedAssignments[variable] = evidenceDict[variable]
                w = w * bayesNet.getCPT(variable).getProbability(conditionedAssignments)
            else:
                conditionedAssignments.update(
                    sampleFromFactor(bayesNet.getCPT(variable), conditionedAssignments))
        newFactor.setProbability(conditionedAssignments,
                                 w + newFactor.getProbability(conditionedAssignments))
    return normalize(newFactor)
def inferenceByLikelihoodWeightingSampling(bayesNet, queryVariables, evidenceDict, numSamples):
    """
    Question 6: Inference by likelihood weighted sampling.

    Returns the factor P(queryVariables | evidenceDict); see the full
    specification in the docstring of the first
    inferenceByLikelihoodWeightingSampling above.
    """
    sampleFromFactor = sampleFromFactorRandomSource(randomSource)

    variableList = bayesNet.linearizeVariables()
    evidenceList = set(evidenceDict.keys())

    # build a new blank factor over the reduced (evidence-specialized) domains
    variableDomainsDict = bayesNet.getReducedVariableDomains(evidenceDict)
    newFactor = Factor(queryVariables, evidenceList, variableDomainsDict)

    # sample numSamples times
    for _ in range(numSamples):
        weight = 1.0
        assignmentDict = {}
        for variable in variableList:
            factor = bayesNet.getCPT(variable)
            if variable in evidenceList:
                assignmentDict[variable] = evidenceDict[variable]
                weight *= factor.getProbability(assignmentDict)
            else:
                assignmentDict.update(sampleFromFactor(factor, assignmentDict))
        # add this sample's weight to the matching entry of the factor
        newFactor.setProbability(assignmentDict,
                                 newFactor.getProbability(assignmentDict) + weight)

    # normalize so it is a true conditional distribution
    return normalize(newFactor)
def inferenceByLikelihoodWeightingSampling(bayesNet, queryVariables, evidenceDict, numSamples):
    """
    Question 6: Inference by likelihood weighted sampling.

    Returns the factor P(queryVariables | evidenceDict); see the full
    specification in the docstring of the first
    inferenceByLikelihoodWeightingSampling above.
    """
    sampleFromFactor = sampleFromFactorRandomSource(randomSource)

    # create the return factor, then specialize its domains to the evidence
    newFactor = Factor(queryVariables, evidenceDict.keys(), bayesNet.variableDomainsDict())
    reducedVariableDomains = bayesNet.getReducedVariableDomains(evidenceDict)
    newFactor = newFactor.specializeVariableDomains(reducedVariableDomains)

    for _ in range(numSamples):
        weight = 1.0
        allAssignments = {}
        allAssignments.update(evidenceDict)
        for var in bayesNet.linearizeVariables():
            tmpCPT = bayesNet.getCPT(var)
            if var in evidenceDict:
                # accumulate weight
                weight *= tmpCPT.getProbability(allAssignments)
            else:
                allAssignments.update(sampleFromFactor(tmpCPT, allAssignments))
        # accumulate sample
        newFactor.setProbability(allAssignments,
                                 newFactor.getProbability(allAssignments) + weight)
    return normalize(newFactor)
def inferenceByLikelihoodWeightingSampling(bayesNet, queryVariables, evidenceDict, numSamples):
    """
    Question 6: Inference by likelihood weighted sampling.

    Returns the factor P(queryVariables | evidenceDict); see the full
    specification in the docstring of the first
    inferenceByLikelihoodWeightingSampling above.
    """
    sampleFromFactor = sampleFromFactorRandomSource(randomSource)

    conditioned = evidenceDict.keys()
    reducedDomains = bayesNet.getReducedVariableDomains(evidenceDict)

    # collect (sample, weight) tuples, then aggregate them into the factor
    sampleToWeightList = []
    for _ in range(numSamples):
        sample = {}
        weight = 1.0
        for variable in bayesNet.linearizeVariables():
            factor = bayesNet.getCPT(variable)
            if variable in evidenceDict:
                sample[variable] = evidenceDict[variable]
                weight *= factor.getProbability(sample)
            else:
                sample.update(sampleFromFactor(factor, sample))
        sampleToWeightList.append((sample, weight))

    answer = Factor(queryVariables, conditioned, reducedDomains)
    for assignment in answer.getAllPossibleAssignmentDicts():
        # sum the weights of all samples consistent with this assignment
        # (i.e. "assignment" is a subset of the stored sample)
        total = 0.0
        for sample, weight in sampleToWeightList:
            if all(item in sample.items() for item in assignment.items()):
                total += weight
        answer.setProbability(assignment, answer.getProbability(assignment) + total)

    return normalize(answer)
def inferenceByVariableElimination(bayesNet, queryVariables, evidenceDict, eliminationOrder):
    """
    Question 6: Your inference by variable elimination implementation.

    Returns the factor P(queryVariables | evidenceDict) by interleaving
    joining on a variable and eliminating that variable; see the full
    specification in the docstring of the first
    inferenceByVariableElimination above.
    """
    # this is for autograding -- don't modify
    joinFactorsByVariable = joinFactorsByVariableWithCallTracking(callTrackingList)
    eliminate = eliminateWithCallTracking(callTrackingList)

    if eliminationOrder is None:  # set an arbitrary elimination order if None given
        eliminationVariables = bayesNet.variablesSet() - set(queryVariables) - \
                               set(evidenceDict.keys())
        eliminationOrder = sorted(list(eliminationVariables))

    cpts = bayesNet.getAllCPTsWithEvidence(evidenceDict)
    for joinVariable in eliminationOrder:
        cpts, joinedFactor = joinFactorsByVariable(cpts, joinVariable)
        # a factor with a single unconditioned variable would eliminate to a
        # trivial factor, so discard it instead of eliminating
        if len(joinedFactor.unconditionedVariables()) > 1:
            cpts.append(eliminate(joinedFactor, joinVariable))

    return normalize(joinFactors(cpts))
def inferenceByVariableElimination(bayesNet, queryVariables, evidenceDict, eliminationOrder):
    """
    Question 6: Your inference by variable elimination implementation

    This function should perform a probabilistic inference query that
    returns the factor: P(queryVariables | evidenceDict)

    It should perform inference by interleaving joining on a variable
    and eliminating that variable, in the order of variables according
    to eliminationOrder.  See inferenceByEnumeration for an example on
    how to use these functions.

    You need to use joinFactorsByVariable to join all of the factors
    that contain a variable in order for the autograder to recognize
    that you performed the correct interleaving of joins and eliminates.

    If a factor that you are about to eliminate a variable from has
    only one unconditioned variable, you should not eliminate it and
    instead just discard the factor.  This is since the result of the
    eliminate would be 1 (you marginalize all of the unconditioned
    variables), but it is not a valid factor.  So this simplifies
    using the result of eliminate.

    The sum of the probabilities should sum to one (so that it is a true
    conditional probability, conditioned on the evidence).

    bayesNet:         The Bayes Net on which we are making a query.
    queryVariables:   A list of the variables which are unconditioned
                      in the inference query.
    evidenceDict:     An assignment dict {variable : value} for the
                      variables which are presented as evidence
                      (conditioned) in the inference query.
    eliminationOrder: The order to eliminate the variables in.

    Hint: BayesNet.getAllCPTsWithEvidence will return all the Conditional
    Probability Tables even if an empty dict (or None) is passed in for
    evidenceDict.  In this case it will not specialize any variable
    domains in the CPTs.

    Useful functions:
    BayesNet.getAllCPTsWithEvidence
    normalize
    eliminate
    joinFactorsByVariable
    joinFactors
    """
    # this is for autograding -- don't modify
    # (callTrackingList is supplied by the enclosing call-tracking wrapper)
    joinFactorsByVariable = joinFactorsByVariableWithCallTracking(callTrackingList)
    eliminate = eliminateWithCallTracking(callTrackingList)
    if eliminationOrder is None:  # set an arbitrary elimination order if None given
        eliminationVariables = bayesNet.variablesSet() - set(queryVariables) - \
                               set(evidenceDict.keys())
        eliminationOrder = sorted(list(eliminationVariables))

    "*** YOUR CODE HERE ***"
    # Grab all factors with the evidence variables already specialized,
    # which reduces the size of the tables.
    currentFactorsList = bayesNet.getAllCPTsWithEvidence(evidenceDict)

    # Interleave joining on each variable and eliminating it, in order.
    for joinVariable in eliminationOrder:
        currentFactorsList, joinedFactor = joinFactorsByVariable(currentFactorsList, joinVariable)
        # If only one unconditioned variable remains, eliminating it would
        # produce the trivial constant-1 factor, so discard it instead.
        if len(joinedFactor.unconditionedVariables()) > 1:
            currentFactorsList.append(eliminate(joinedFactor, joinVariable))

    # currentFactorsList now holds the connected components of the graph
    # as factors; join them and normalize to get P(queryVariables | evidenceDict).
    return normalize(joinFactors(currentFactorsList))
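The join/eliminate interleaving above can be illustrated without the project's `Factor` class. The sketch below (my own dict-based representation, not the project's API) joins two factors by pointwise product and then sums a variable out, for a two-node chain A -> B with illustrative probabilities:

```python
def join(f1, f2):
    """Pointwise product of two factors over the union of their variables.

    A factor is a (variables, table) pair, where `table` maps a tuple of
    values (one per variable, in order) to a probability.
    """
    v1, t1 = f1
    v2, t2 = f2
    shared = [v for v in v1 if v in v2]
    out_vars = list(v1) + [v for v in v2 if v not in v1]
    table = {}
    for a1, p1 in t1.items():
        for a2, p2 in t2.items():
            # multiply only rows that agree on the shared variables
            if all(a1[v1.index(v)] == a2[v2.index(v)] for v in shared):
                row = dict(zip(v1, a1))
                row.update(zip(v2, a2))
                table[tuple(row[v] for v in out_vars)] = p1 * p2
    return out_vars, table

def eliminate_var(factor, var):
    """Sum the factor over all values of `var` (marginalization)."""
    vars_, table = factor
    i = vars_.index(var)
    out_vars = vars_[:i] + vars_[i + 1:]
    out = {}
    for assignment, p in table.items():
        key = assignment[:i] + assignment[i + 1:]
        out[key] = out.get(key, 0.0) + p
    return out_vars, out

# P(A) and P(B|A); the CPT factor's variables are [B, A]
pA = (['A'], {(True,): 0.6, (False,): 0.4})
pBA = (['B', 'A'], {(True, True): 0.8, (False, True): 0.2,
                    (True, False): 0.3, (False, False): 0.7})
joint = join(pA, pBA)            # factor over A, B: P(A, B)
pB = eliminate_var(joint, 'A')   # marginal over B: P(B)
```

Here `eliminate_var(joint, 'A')` yields P(B = True) = 0.6*0.8 + 0.4*0.3 = 0.60, the same result variable elimination would produce by joining on A and summing it out.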
def inferenceByLikelihoodWeightingSampling(bayesNet, queryVariables, evidenceDict, numSamples):
    """
    Question 7: Inference by likelihood weighted sampling

    This function should perform a probabilistic inference query that
    returns the factor: P(queryVariables | evidenceDict)

    It should perform inference by performing likelihood weighting
    sampling.  It should sample numSamples times.

    In order for the autograder's solution to match yours, your outer
    loop needs to iterate over the number of samples, with the inner
    loop sampling from each variable's factor.  Use the ordering of
    variables provided by BayesNet.linearizeVariables in your inner
    loop so that the order of samples matches the autograder's.  There
    are typically many linearization orders of a directed acyclic
    graph (DAG), however we just use a particular one.

    The sum of the probabilities should sum to one (so that it is a true
    conditional probability, conditioned on the evidence).

    bayesNet:       The Bayes Net on which we are making a query.
    queryVariables: A list of the variables which are unconditioned
                    in the inference query.
    evidenceDict:   An assignment dict {variable : value} for the
                    variables which are presented as evidence
                    (conditioned) in the inference query.
    numSamples:     The number of samples that should be taken.

    Useful functions:
    sampleFromFactor
    normalize
    BayesNet.getCPT
    BayesNet.linearizeVariables
    """
    sampleFromFactor = sampleFromFactorRandomSource(randomSource)
    "*** YOUR CODE HERE ***"
    # Build an empty factor over the query (unconditioned) and evidence
    # (conditioned) variables, with domains reduced by the evidence.
    setsOfUnconditioned = set(queryVariables)
    setsOfConditioned = set(evidenceDict.keys())
    reducedDomains = bayesNet.getReducedVariableDomains(evidenceDict)
    newFactor = Factor(setsOfUnconditioned, setsOfConditioned, reducedDomains)

    for _ in range(numSamples):
        weight = 1.0
        sample = evidenceDict.copy()
        for var in bayesNet.linearizeVariables():
            cpt = bayesNet.getCPT(var)
            if var in evidenceDict:
                # Evidence variables stay fixed; weight the sample by the
                # likelihood of the evidence given its (already sampled) parents.
                weight *= cpt.getProbability(sample)
            else:
                # Sample non-evidence variables given the assignments so far.
                sample.update(sampleFromFactor(cpt, sample))
        # Accumulate this sample's weight in the matching factor row.
        newFactor.setProbability(sample, newFactor.getProbability(sample) + weight)

    return normalize(newFactor)
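The weighting scheme above can be checked numerically on the smallest possible case. The sketch below (standalone, with made-up probabilities, independent of the project's `BayesNet`/`Factor` API) estimates P(A = True | B = True) for a two-node net A -> B: the non-evidence variable A is sampled from its prior, and each sample is weighted by the likelihood of the evidence B = True:

```python
import random

# Illustrative parameters for a net A -> B (not from the project):
P_A = 0.6                               # P(A = True)
P_B_GIVEN_A = {True: 0.8, False: 0.3}   # P(B = True | A)

def likelihood_weighting_estimate(num_samples, seed=42):
    """Estimate P(A = True | B = True) by likelihood weighting."""
    rng = random.Random(seed)
    weighted = {True: 0.0, False: 0.0}
    for _ in range(num_samples):
        a = rng.random() < P_A        # sample the non-evidence variable A
        weight = P_B_GIVEN_A[a]       # weight by likelihood of evidence B = True
        weighted[a] += weight
    # normalizing the weighted counts yields the conditional estimate
    return weighted[True] / (weighted[True] + weighted[False])
```

The exact answer by Bayes' rule is 0.6*0.8 / (0.6*0.8 + 0.4*0.3) = 0.8, and with enough samples the estimate converges to it.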
def inferenceByVariableElimination(bayesNet, queryVariables, evidenceDict, eliminationOrder):
    """
    Question 4: Your inference by variable elimination implementation

    This function should perform a probabilistic inference query that
    returns the factor: P(queryVariables | evidenceDict)

    It should perform inference by interleaving joining on a variable
    and eliminating that variable, in the order of variables according
    to eliminationOrder.  See inferenceByEnumeration for an example on
    how to use these functions.

    You need to use joinFactorsByVariable to join all of the factors
    that contain a variable in order for the autograder to recognize
    that you performed the correct interleaving of joins and eliminates.

    If a factor that you are about to eliminate a variable from has
    only one unconditioned variable, you should not eliminate it and
    instead just discard the factor.  This is since the result of the
    eliminate would be 1 (you marginalize all of the unconditioned
    variables), but it is not a valid factor.  So this simplifies
    using the result of eliminate.

    The sum of the probabilities should sum to one (so that it is a true
    conditional probability, conditioned on the evidence).

    bayesNet:         The Bayes Net on which we are making a query.
    queryVariables:   A list of the variables which are unconditioned
                      in the inference query.
    evidenceDict:     An assignment dict {variable : value} for the
                      variables which are presented as evidence
                      (conditioned) in the inference query.
    eliminationOrder: The order to eliminate the variables in.

    Hint: BayesNet.getAllCPTsWithEvidence will return all the Conditional
    Probability Tables even if an empty dict (or None) is passed in for
    evidenceDict.  In this case it will not specialize any variable
    domains in the CPTs.

    Useful functions:
    BayesNet.getAllCPTsWithEvidence
    normalize
    eliminate
    joinFactorsByVariable
    joinFactors
    """
    # this is for autograding -- don't modify
    joinFactorsByVariable = joinFactorsByVariableWithCallTracking(callTrackingList)
    eliminate = eliminateWithCallTracking(callTrackingList)
    if eliminationOrder is None:  # set an arbitrary elimination order if None given
        eliminationVariables = bayesNet.variablesSet() - set(queryVariables) - \
                               set(evidenceDict.keys())
        eliminationOrder = sorted(list(eliminationVariables))

    "*** YOUR CODE HERE ***"
    currentFactorsList = bayesNet.getAllCPTsWithEvidence(evidenceDict)
    for joinVariable in eliminationOrder:
        # join all factors that mention the variable
        currentFactorsList, joinedFactor = joinFactorsByVariable(currentFactorsList, joinVariable)
        # marginalize the variable out, unless it is the only unconditioned
        # variable left -- in that case the factor is simply discarded.
        # (Note: the original comparison `unconditionedVariables() != set(joinVariable)`
        # was a bug: set() of a variable-name string is a set of its characters.)
        if len(joinedFactor.unconditionedVariables()) > 1:
            currentFactorsList.append(eliminate(joinedFactor, joinVariable))
    # currentFactorsList should contain the connected components of the graph
    # now as factors; join the connected components and normalize
    fullJointOverQueryAndEvidence = joinFactors(currentFactorsList)
    queryConditionedOnEvidence = normalize(fullJointOverQueryAndEvidence)
    return queryConditionedOnEvidence
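The final `normalize` step matters because after the joins the factor's entries are only proportional to P(Q, e), not a distribution. A minimal sketch of what normalization does, using a plain dict in place of the project's `Factor` (the numbers are illustrative):

```python
def normalize_table(table):
    """Rescale a factor's entries so they sum to 1 (a valid conditional)."""
    total = sum(table.values())
    if total == 0:
        raise ValueError("an all-zero factor cannot be normalized")
    return {assignment: p / total for assignment, p in table.items()}

# Entries proportional to P(Q, e) for a binary query variable Q:
unnormalized = {(True,): 0.12, (False,): 0.04}
# Dividing by their sum (= P(e)) yields P(Q | e):
conditional = normalize_table(unnormalized)
```

Here `conditional` is {(True,): 0.75, (False,): 0.25}, i.e. 0.12/0.16 and 0.04/0.16, which is exactly the division by P(evidence) in Bayes' rule.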
def inferenceByLikelihoodWeightingSampling(bayesNet, queryVariables, evidenceDict, numSamples):
    """
    Question 6: Inference by likelihood weighted sampling

    This function should perform a probabilistic inference query that
    returns the factor: P(queryVariables | evidenceDict)

    It should perform inference by performing likelihood weighting
    sampling.  It should sample numSamples times.

    In order for the autograder's solution to match yours, your outer
    loop needs to iterate over the number of samples, with the inner
    loop sampling from each variable's factor.  Use the ordering of
    variables provided by BayesNet.linearizeVariables in your inner
    loop so that the order of samples matches the autograder's.  There
    are typically many linearization orders of a directed acyclic
    graph (DAG), however we just use a particular one.

    The sum of the probabilities should sum to one (so that it is a true
    conditional probability, conditioned on the evidence).

    bayesNet:       The Bayes Net on which we are making a query.
    queryVariables: A list of the variables which are unconditioned
                    in the inference query.
    evidenceDict:   An assignment dict {variable : value} for the
                    variables which are presented as evidence
                    (conditioned) in the inference query.
    numSamples:     The number of samples that should be taken.

    Useful functions:
    sampleFromFactor
    normalize
    BayesNet.getCPT
    BayesNet.linearizeVariables
    """
    sampleFromFactor = sampleFromFactorRandomSource(randomSource)
    "*** YOUR CODE HERE ***"
    linearizedVariables = bayesNet.linearizeVariables()
    variableDomainsDict = bayesNet.variableDomainsDict()

    # Create the output factor over the query and evidence variables;
    # each evidence variable's domain collapses to its observed value.
    queryEvidenceDomainsDict = {}
    for query in queryVariables:
        queryEvidenceDomainsDict[query] = variableDomainsDict[query]
    for evidenceVar, value in evidenceDict.items():
        queryEvidenceDomainsDict[evidenceVar] = [value]
    sampleFactor = Factor(queryVariables, evidenceDict.keys(), queryEvidenceDomainsDict)

    for _ in range(numSamples):
        weight = 1.0
        sampleVars = {}  # assignments made so far in this sample
        for var in linearizedVariables:
            if var in evidenceDict:
                # evidence variables are fixed; weight by their likelihood
                sampleVars[var] = evidenceDict[var]
                weight *= bayesNet.getCPT(var).getProbability(sampleVars)
            else:
                # sample non-evidence variables given the assignments so far
                sampleVars.update(sampleFromFactor(bayesNet.getCPT(var), sampleVars))
        # update the corresponding row in the factor with this sample's weight
        sampleFactor.setProbability(sampleVars, weight + sampleFactor.getProbability(sampleVars))

    return normalize(sampleFactor)
def inferenceByVariableElimination(bayesNet, queryVariables, evidenceDict, eliminationOrder):
    """
    Question 6: Your inference by variable elimination implementation

    This function should perform a probabilistic inference query that
    returns the factor: P(queryVariables | evidenceDict)

    It should perform inference by interleaving joining on a variable
    and eliminating that variable, in the order of variables according
    to eliminationOrder.  See inferenceByEnumeration for an example on
    how to use these functions.

    You need to use joinFactorsByVariable to join all of the factors
    that contain a variable in order for the autograder to recognize
    that you performed the correct interleaving of joins and eliminates.

    If a factor that you are about to eliminate a variable from has
    only one unconditioned variable, you should not eliminate it and
    instead just discard the factor.  This is since the result of the
    eliminate would be 1 (you marginalize all of the unconditioned
    variables), but it is not a valid factor.  So this simplifies
    using the result of eliminate.

    The sum of the probabilities should sum to one (so that it is a true
    conditional probability, conditioned on the evidence).

    bayesNet:         The Bayes Net on which we are making a query.
    queryVariables:   A list of the variables which are unconditioned
                      in the inference query.
    evidenceDict:     An assignment dict {variable : value} for the
                      variables which are presented as evidence
                      (conditioned) in the inference query.
    eliminationOrder: The order to eliminate the variables in.

    Hint: BayesNet.getAllCPTsWithEvidence will return all the Conditional
    Probability Tables even if an empty dict (or None) is passed in for
    evidenceDict.  In this case it will not specialize any variable
    domains in the CPTs.

    Useful functions:
    BayesNet.getAllCPTsWithEvidence
    normalize
    eliminate
    joinFactorsByVariable
    joinFactors
    """
    # this is for autograding -- don't modify
    joinFactorsByVariable = joinFactorsByVariableWithCallTracking(callTrackingList)
    eliminate = eliminateWithCallTracking(callTrackingList)
    if eliminationOrder is None:  # set an arbitrary elimination order if None given
        eliminationVariables = bayesNet.variablesSet() - set(queryVariables) - \
                               set(evidenceDict.keys())
        eliminationOrder = sorted(list(eliminationVariables))

    "*** YOUR CODE HERE ***"
    # grab all factors where we know the evidence variables
    # (to reduce the size of the tables)
    currentFactorsList = bayesNet.getAllCPTsWithEvidence(evidenceDict)
    for var in eliminationOrder:
        currentFactorsList, newFactor = joinFactorsByVariable(currentFactorsList, var)
        # a factor with a single unconditioned variable is discarded rather
        # than eliminated (the eliminate would yield the trivial factor 1)
        if len(newFactor.unconditionedVariables()) > 1:
            currentFactorsList.append(eliminate(newFactor, var))
    return normalize(joinFactors(currentFactorsList))
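The `eliminationOrder` argument is not just bookkeeping: the order determines how large the intermediate joined factors get. A small standalone sketch (my own scope-counting helper, not part of the project's API) measures the largest table a given order produces for the chain A -> B -> C -> D with binary variables:

```python
def max_intermediate_size(factor_scopes, order, domain_size=2):
    """Return the largest joined-factor table size an elimination order produces.

    `factor_scopes` lists each factor's variable set; joining on a variable
    merges every scope containing it, and eliminating removes the variable
    from the merged scope.
    """
    scopes = [set(s) for s in factor_scopes]
    worst = 0
    for var in order:
        touching = [s for s in scopes if var in s]
        scopes = [s for s in scopes if var not in s]
        joined = set().union(*touching) if touching else set()
        worst = max(worst, domain_size ** len(joined))
        joined.discard(var)
        if joined:
            scopes.append(joined)
    return worst

# Chain A -> B -> C -> D: factors P(A), P(B|A), P(C|B), P(D|C)
chain = [{'A'}, {'A', 'B'}, {'B', 'C'}, {'C', 'D'}]
```

Eliminating along the chain (`['A', 'B', 'C']`) never builds a table bigger than 4 entries, while eliminating the middle variable B first merges P(B|A) and P(C|B) into a factor over {A, B, C} with 8 entries, which is why a good ordering matters on larger nets.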
def inferenceByVariableElimination(bayesNet, queryVariables, evidenceDict, eliminationOrder):
    """
    Question 6: Your inference by variable elimination implementation

    This function should perform a probabilistic inference query that
    returns the factor: P(queryVariables | evidenceDict)

    It should perform inference by interleaving joining on a variable
    and eliminating that variable, in the order of variables according
    to eliminationOrder.  See inferenceByEnumeration for an example on
    how to use these functions.

    You need to use joinFactorsByVariable to join all of the factors
    that contain a variable in order for the autograder to recognize
    that you performed the correct interleaving of joins and eliminates.

    If a factor that you are about to eliminate a variable from has
    only one unconditioned variable, you should not eliminate it and
    instead just discard the factor.  This is since the result of the
    eliminate would be 1 (you marginalize all of the unconditioned
    variables), but it is not a valid factor.  So this simplifies
    using the result of eliminate.

    The sum of the probabilities should sum to one (so that it is a true
    conditional probability, conditioned on the evidence).

    bayesNet:         The Bayes Net on which we are making a query.
    queryVariables:   A list of the variables which are unconditioned
                      in the inference query.
    evidenceDict:     An assignment dict {variable : value} for the
                      variables which are presented as evidence
                      (conditioned) in the inference query.
    eliminationOrder: The order to eliminate the variables in.

    Hint: BayesNet.getAllCPTsWithEvidence will return all the Conditional
    Probability Tables even if an empty dict (or None) is passed in for
    evidenceDict.  In this case it will not specialize any variable
    domains in the CPTs.

    Useful functions:
    BayesNet.getAllCPTsWithEvidence
    normalize
    eliminate
    joinFactorsByVariable
    joinFactors
    """
    # this is for autograding -- don't modify
    joinFactorsByVariable = joinFactorsByVariableWithCallTracking(callTrackingList)
    eliminate = eliminateWithCallTracking(callTrackingList)
    if eliminationOrder is None:  # set an arbitrary elimination order if None given
        eliminationVariables = bayesNet.variablesSet() - set(queryVariables) - \
                               set(evidenceDict.keys())
        eliminationOrder = sorted(list(eliminationVariables))

    "*** YOUR CODE HERE ***"
    # This follows the same pattern as inferenceByEnumeration, except that
    # instead of joining every factor first and then eliminating every hidden
    # variable afterwards, each variable is joined on and eliminated in turn.
    # If a join leaves a factor whose only unconditioned variable is the one
    # being eliminated, summing it out would collapse the factor to the
    # constant 1, which is not a valid Factor -- so such a factor is
    # discarded instead of eliminated.

    # generate the current factor list; it is updated as variables are eliminated
    currentFactorsList = bayesNet.getAllCPTsWithEvidence(evidenceDict)

    # loop through the elimination order, then join and eliminate
    for elimVariable in eliminationOrder:
        # join all factors containing elimVariable
        currentFactorsList, joinedFactor = joinFactorsByVariable(currentFactorsList, elimVariable)
        # eliminate only if at least two unconditioned variables remain;
        # otherwise discard the joined factor (do nothing)
        numUnconditioned = len(joinedFactor.unconditionedVariables())
        if numUnconditioned >= 2:
            incrementallyMarginalizedJoint = eliminate(joinedFactor, elimVariable)
            currentFactorsList.append(incrementallyMarginalizedJoint)

    # join the remaining factors and normalize so the probabilities sum to one
    finalFactorsList = joinFactors(currentFactorsList)
    finalFactorsListNormalized = normalize(finalFactorsList)
    return finalFactorsListNormalized
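The discard rule that the docstring describes can be seen in two lines: when a factor's only unconditioned variable is the one being eliminated, summing it out leaves a single number, and (per the docstring) that number is 1, which carries no information. A minimal sketch with an illustrative dict-based table:

```python
def eliminate_only_variable(table):
    """Summing out the sole unconditioned variable collapses the table to a
    single number -- the trivial '1' the docstring says is not a valid
    factor, which is why the implementation discards the factor instead."""
    return sum(table.values())

# Illustrative values for P(X | evidence) over X's two outcomes:
p_X = {('sun',): 0.25, ('rain',): 0.75}
total = eliminate_only_variable(p_X)   # collapses to 1.0
```

Because the entries form a distribution over X, `total` is 1.0: eliminating would replace a useful table with a constant, so discarding is the right move.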