This is the multidimensional 0-1 knapsack problem, which is NP-hard.
An overview of solution methods can be found here, a relatively recent research paper here, and a genetic algorithm implementation in Python here.
Taken from the Python implementation (pyeasyga, linked above) is this example:
from pyeasyga import pyeasyga
# setup data
data = [(821, 0.8, 118), (1144, 1, 322), (634, 0.7, 166), (701, 0.9, 195),
        (291, 0.9, 100), (1702, 0.8, 142), (1633, 0.7, 100), (1086, 0.6, 145),
        (124, 0.6, 100), (718, 0.9, 208), (976, 0.6, 100), (1438, 0.7, 312),
        (910, 1, 198), (148, 0.7, 171), (1636, 0.9, 117), (237, 0.6, 100),
        (771, 0.9, 329), (604, 0.6, 391), (1078, 0.6, 100), (640, 0.8, 120),
        (1510, 1, 188), (741, 0.6, 271), (1358, 0.9, 334), (1682, 0.7, 153),
        (993, 0.7, 130), (99, 0.7, 100), (1068, 0.8, 154), (1669, 1, 289)]
ga = pyeasyga.GeneticAlgorithm(data) # initialise the GA with data
ga.population_size = 200 # increase population size to 200 (default value is 50)
# define a fitness function
def fitness(individual, data):
    weight, volume, price = 0, 0, 0
    for (selected, item) in zip(individual, data):
        if selected:
            weight += item[0]
            volume += item[1]
            price += item[2]
    if weight > 12210 or volume > 12:
        price = 0
    return price
ga.fitness_function = fitness # set the GA's fitness function
ga.run() # run the GA
print(ga.best_individual())              # print the GA's best solution
The last dimension of each tuple in data is the price; the other two dimensions are the weight and the volume.
You can adjust this example so that it solves problems with more than two constraint dimensions.
I hope that helps.
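The adjustment mentioned above can be sketched as a fitness function parameterized by the capacity limits, so that any number of constraint dimensions is supported. This is an illustrative sketch, not part of pyeasyga itself; the helper name make_fitness and the caps argument are my own:

```python
# Hedged sketch: a fitness function generalized to k constraint dimensions.
# Each item is a tuple (c_1, ..., c_k, price); caps holds the k capacity
# limits, in the same order as the constraint entries of each item.
def make_fitness(caps):
    def fitness(individual, data):
        totals = [0] * len(caps)
        price = 0
        for selected, item in zip(individual, data):
            if selected:
                for d in range(len(caps)):
                    totals[d] += item[d]
                price += item[-1]
        # Zero fitness if any capacity is exceeded, as in the original example
        if any(t > cap for t, cap in zip(totals, caps)):
            return 0
        return price
    return fitness
```

For the two-constraint example above you would set ga.fitness_function = make_fitness([12210, 12]); adding a dimension only means appending an entry to each item tuple and to caps.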
EDIT: The genetic algorithm does not, in general, guarantee that it finds the optimal solution. For three constraints it will probably find good solutions, but there is no guarantee of optimality.
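One way to check the GA's output on small instances is an exhaustive search over all subsets, which is exact but only feasible for roughly 20 items or fewer (it enumerates 2^n subsets). This is a sketch of my own, using the same (weight, volume, price) item format as the example:

```python
from itertools import product

# Hedged sketch: exhaustive exact solver for the two-constraint knapsack,
# useful as a ground truth when testing the GA on small instances.
def brute_force_knapsack(data, w_cap, v_cap):
    best_price, best_choice = 0, None
    for choice in product([0, 1], repeat=len(data)):
        weight = sum(item[0] for sel, item in zip(choice, data) if sel)
        volume = sum(item[1] for sel, item in zip(choice, data) if sel)
        price = sum(item[2] for sel, item in zip(choice, data) if sel)
        if weight <= w_cap and volume <= v_cap and price > best_price:
            best_price, best_choice = price, choice
    return best_price, best_choice
```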
UPDATE: Mathematical Optimization Solution
One other option is to use PuLP, an open source modeling framework for mathematical optimization problems. This framework invokes a solver, i.e., a piece of software designed specifically to solve optimization problems. In a nutshell, the job of the framework is to link the mathematical problem description with the form it needs to have when solved, and the job of the solver is to actually solve the problem.
You can install PuLP with, e.g., pip (pip install pulp).
Here is the previous example modeled in PuLP, adapted from this example:
import pulp as plp
# Let's keep the same data
data = [(821, 0.8, 118), (1144, 1, 322), (634, 0.7, 166), (701, 0.9, 195),
        (291, 0.9, 100), (1702, 0.8, 142), (1633, 0.7, 100), (1086, 0.6, 145),
        (124, 0.6, 100), (718, 0.9, 208), (976, 0.6, 100), (1438, 0.7, 312),
        (910, 1, 198), (148, 0.7, 171), (1636, 0.9, 117), (237, 0.6, 100),
        (771, 0.9, 329), (604, 0.6, 391), (1078, 0.6, 100), (640, 0.8, 120),
        (1510, 1, 188), (741, 0.6, 271), (1358, 0.9, 334), (1682, 0.7, 153),
        (993, 0.7, 130), (99, 0.7, 100), (1068, 0.8, 154), (1669, 1, 289)]
w_cap, v_cap = 12210, 12
rng_items = range(len(data))
# Restructure the data in dictionaries
items = ['item_{}'.format(i) for i in rng_items]
weight = {items[i]: data[i][0] for i in rng_items}
volume = {items[i]: data[i][1] for i in rng_items}
price = {items[i]: data[i][2] for i in rng_items}
# Make the problem, declare it as a maximization problem
problem_name = "3D_Knapsack"  # avoid spaces in the name; PuLP replaces them and warns
prob = plp.LpProblem(problem_name, plp.LpMaximize)
# Define the variables
plp_vars = plp.LpVariable.dicts('', items, 0, 1, plp.LpInteger)
# Objective function
prob += plp.lpSum([price[i]*plp_vars[i] for i in plp_vars])
# Constraints
prob += plp.lpSum([weight[i]*plp_vars[i] for i in plp_vars]) <= w_cap
prob += plp.lpSum([volume[i]*plp_vars[i] for i in plp_vars]) <= v_cap
# Solution
prob.solve()
# If you want to save the problem formulation in a file
# prob.writeLP(problem_name + '.lp')
# Each of the variables is printed with its resolved optimum value
for v in prob.variables():
    print(v.name, "=", v.varValue)
# The optimised objective function value is printed to the screen
print("Total gain =", plp.value(prob.objective))
with an optimal objective value of 3,540.
A demonstration of how this runs is here.
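The PuLP model generalizes to more dimensions just as easily, by adding one capacity constraint per dimension in a loop. The following is a sketch of my own (the function name solve_knapsack and the caps argument are illustrative); it assumes items of the form (c_1, ..., c_k, price):

```python
import pulp as plp

# Hedged sketch: the PuLP knapsack model generalized to k constraint
# dimensions. data is a list of (c_1, ..., c_k, price) tuples and caps
# holds the k capacity limits.
def solve_knapsack(data, caps):
    items = list(range(len(data)))
    prob = plp.LpProblem("kD_Knapsack", plp.LpMaximize)
    x = plp.LpVariable.dicts('x', items, cat=plp.LpBinary)
    # Objective: total price of the selected items
    prob += plp.lpSum(data[i][-1] * x[i] for i in items)
    # One capacity constraint per dimension
    for d in range(len(caps)):
        prob += plp.lpSum(data[i][d] * x[i] for i in items) <= caps[d]
    prob.solve(plp.PULP_CBC_CMD(msg=False))
    chosen = [i for i in items if x[i].varValue > 0.5]
    return chosen, plp.value(prob.objective)
```

With caps = [12210, 12] and the data above this reproduces the two-constraint model; a third constraint only requires extending each item tuple and caps.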